From: David Rysdam <drysdam(at)ll(dot)mit(dot)edu>
To: Postgres General <pgsql-general(at)postgresql(dot)org>
Subject: Re: Very slow queries on 8.1
Date: 2005-11-17 17:38:28
Message-ID: 437CC014.60400@ll.mit.edu
Lists: pgsql-general
Michael Fuhr wrote:
>On Thu, Nov 17, 2005 at 10:38:50AM -0500, David Rysdam wrote:
>
>
>>I'm porting an application from Sybase and I've noticed that similar
>>application functions take 2 to 3 times longer on postgres than they
>>used to on the same machine running under Sybase. I've tried changing
>>various "performance tuning" parameters, such as shared_buffers,
>>effective_cache_size, etc but there's little or no effect.
>>
>>
>
>What's your OS and hardware environment and what are your non-default
>PostgreSQL settings?
>
>
>
Fedora Core 2, dual 2.8 GHz CPUs, 2 GB RAM.
shared_buffers = 10000
effective_cache_size = 100000
>>Right now, I'm working on a test case that involves a table with ~360k
>>rows called "nb.sigs". My sample query is:
>>
>>select * from nb.sigs where signum > 250000
>>
>>With no index, explain says this query costs 11341. After CREATE INDEX
>>on the signum field, along with an ANALYZE for nb.sigs, the query costs
>>3456 and takes around 4 seconds to return the first row.
>>
>>
>
>Please post the EXPLAIN ANALYZE output for the query -- that'll
>show us the query plan, the accuracy of the planner's row count
>estimate, and how long the query takes to execute on the server.
>It might also be useful to see the table definition and the output
>of the following query:
>
>SELECT null_frac, n_distinct, correlation
>FROM pg_stats
>WHERE schemaname = 'nb' AND tablename = 'sigs' AND attname = 'signum';
>
>
>
The first items would be problematic to supply, since the database is on a
computer that has no access to the Internet or to the machine I'm writing
this on. As for the query:
 null_frac | n_distinct | correlation
-----------+------------+-------------
         0 |         -1 |           1
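For reference, the plan output Michael asked for could be captured on the
isolated machine and transcribed by hand; a minimal sketch, assuming the
test table and query described above:

    EXPLAIN ANALYZE
    SELECT * FROM nb.sigs WHERE signum > 250000;

Unlike plain EXPLAIN, this actually executes the query, so it reports real
row counts and timings alongside the planner's estimates.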
>What client interface are you using? If the query returns a lot
>of rows then you might benefit from using a cursor to fetch the
>result set in chunks; otherwise the client library is probably
>waiting for the entire result set to arrive before returning any
>rows to you.
>
That does seem to be the problem. I've never worked with cursors, so
I'll have to see if I can fit our DB module into that mold.
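The chunked-fetch approach Michael describes can be sketched in plain SQL,
assuming the test table from above (the cursor name and fetch size here are
arbitrary):

    BEGIN;

    -- Declare a server-side cursor over the slow query.
    DECLARE sigs_cur CURSOR FOR
        SELECT * FROM nb.sigs WHERE signum > 250000;

    -- Fetch the result set in chunks instead of all at once; the first
    -- rows come back without waiting for the whole set to be materialized.
    FETCH FORWARD 1000 FROM sigs_cur;
    FETCH FORWARD 1000 FROM sigs_cur;
    -- ...repeat until FETCH returns no rows...

    CLOSE sigs_cur;
    COMMIT;

Note that a cursor declared this way only lives inside a transaction block
(unless declared WITH HOLD), so the DB module would need to keep the
transaction open while it iterates.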