Re: slow select in big table

From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: rafalak <rafalak(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: slow select in big table
Date: 2009-04-03 02:48:15
Message-ID: dcc563d10904021948t7e21a04bn8885dfbbcb1531c5@mail.gmail.com
Lists: pgsql-general

On Thu, Apr 2, 2009 at 2:48 PM, rafalak <rafalak(at)gmail(dot)com> wrote:
> Hello, I have a big table:
> 80 mln records, ~6GB data, 2 columns (int, int)
>
> For the query
> select count(col1) from tab where col2=1234;
> when few records match (1-10), the time is good: 30-40 ms.
> But when more than 1000 records match, the time is over 12 s.
>
>
> How can I increase performance?
>
>
> my postgresql.conf
> shared_buffers = 810MB
> temp_buffers = 128MB
> work_mem = 512MB
> maintenance_work_mem = 256MB
> max_stack_depth = 7MB
> effective_cache_size = 800MB

Try lowering random_page_cost close to the setting of seq_page_cost
(i.e. just over 1 on a default seq_page_cost) and see if that helps.
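The setting can be tried per-session before touching postgresql.conf. A minimal sketch, reusing the table and column names from the query above (the index name is hypothetical, and whether an index on col2 exists is not stated in the thread):

```sql
-- Try the planner setting for this session only, then re-check the plan.
SET random_page_cost = 1.2;   -- default seq_page_cost is 1.0
EXPLAIN ANALYZE SELECT count(col1) FROM tab WHERE col2 = 1234;

-- If the plan still shows a sequential scan, an index on the filter
-- column is the usual fix for this query shape (hypothetical name):
CREATE INDEX tab_col2_idx ON tab (col2);
ANALYZE tab;
```

If the EXPLAIN ANALYZE output improves, the setting can then be made permanent in postgresql.conf.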
