From: | "Albe Laurenz" <laurenz(dot)albe(at)wien(dot)gv(dot)at> |
---|---|
To: | "Milan Zamazal *EXTERN*" <pdm(at)brailcom(dot)org>, <pgsql-general(at)postgresql(dot)org> |
Subject: | Re: Large tables, ORDER BY and sequence/index scans |
Date: | 2010-01-05 13:35:06 |
Message-ID: | D960CB61B694CF459DCFB4B0128514C20393810B@exadv11.host.magwien.gv.at |
Lists: pgsql-general
Milan Zamazal wrote:
> My problem is that retrieving sorted data from large tables is
> sometimes very slow in PostgreSQL (8.4.1, FWIW).
>
> I typically retrieve the data using cursors, to display them in UI:
>
> BEGIN;
> DECLARE ... SELECT ... ORDER BY ...;
> FETCH ...;
> ...
>
> On a newly created table of about 10 million rows, the FETCH command
> takes about one minute by default, with an additional delay during the
> subsequent COMMIT command. This is because PostgreSQL uses a
> sequential scan on the table even when there is an index on the ORDER
> BY column. When I force PostgreSQL to perform an index scan (e.g. by
> setting one of the options enable_seqscan or enable_sort to off), the
> FETCH response is immediate.
>
> The PostgreSQL manual explains the motivation for sequential scans of
> large tables, and I can understand it. Nevertheless, such behavior
> leads to unacceptably poor performance in my particular case. It is
> important to get the first resulting rows quickly, to display them to
> the user without delay.
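(The workaround you describe is session-local; a purely illustrative
sketch:

    SET enable_seqscan = off;  -- the planner then heavily penalizes sequential scans in this session

Note that this affects every query in the session, which is why a more
targeted setting is preferable.)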
Did you try to reduce the cursor_tuple_fraction parameter?
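For illustration, a minimal sketch of what I mean (the table t, the
index on column c, the fetch size, and the value 0.01 are placeholders,
not taken from your setup):

    SET cursor_tuple_fraction = 0.01;  -- default is 0.1; lower values favor fast-start plans
    BEGIN;
    DECLARE cur CURSOR FOR SELECT * FROM t ORDER BY c;
    FETCH 100 FROM cur;  -- should use the index on c and return quickly
    CLOSE cur;
    COMMIT;

With a small cursor_tuple_fraction, the planner optimizes the cursor's
query for returning the first rows quickly rather than for total
execution time, so it should choose the index scan without your having
to disable enable_seqscan globally.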
Yours,
Laurenz Albe