| From: | "Merlin Moncure" <merlin(dot)moncure(at)rcsonline(dot)com> |
|---|---|
| To: | "Joost Kraaijeveld" <J(dot)Kraaijeveld(at)Askesis(dot)nl> |
| Cc: | "Pgsql-Performance (E-mail)" <pgsql-performance(at)postgresql(dot)org>, "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
| Subject: | Re: Retry: Is this possible / slow performance? |
| Date: | 2005-02-07 19:52:53 |
| Message-ID: | 6EE64EF3AB31D5448D0007DD34EEB3412A7613@Herge.rcsinc.local |
| Lists: | pgsql-performance |
> >> The best solution is probably to put a LIMIT into the DECLARE CURSOR,
> >> so that the planner can see how much you intend to fetch.
> I assume that this limits the result set to the LIMIT. That is not what I
> was hoping for. I was hoping for a way to scroll through the whole table
> with orders.
>
> I have tested, and if one really wants the whole table, the query with
> "set enable_seqscan = on" lasts 137 secs and the query with
> "set enable_seqscan = off" lasts 473 secs, so (alas), the planner is right.
>
> I sure would like to have ISAM-like behaviour once in a while.
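[For reference, a minimal SQL sketch of the suggestion quoted above (a LIMIT inside the DECLARE CURSOR query); the `orders` table, the `order_id` column, and the row counts are placeholders for illustration, not taken from the thread.]

```sql
BEGIN;

-- Without a LIMIT the planner assumes the whole result set may be
-- fetched and (as the timings above show) correctly prefers a
-- sequential scan.  A LIMIT matching the number of rows you actually
-- intend to FETCH lets it choose an index scan instead, at the cost of
-- capping the cursor at that many rows, which is the objection above.
DECLARE order_cur CURSOR FOR
    SELECT *
    FROM orders              -- placeholder table name
    ORDER BY order_id        -- placeholder key column
    LIMIT 1000;

FETCH 1000 FROM order_cur;

CLOSE order_cur;
COMMIT;
```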
Then stop using cursors. A few months back I detailed the relative
merits of cursors vs. queries for providing ISAM-like functionality,
and queries won hands down. Right now I am using pg as an ISAM backend
for a relatively old and large COBOL ERP via a C++ ISAM driver, for
which a publicly available version of the source will be available Real
Soon Now :-).
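[A minimal sketch of the kind of query-based, ISAM-style access being described, using keyset pagination over a placeholder `orders` table with an indexed `order_id` key; Merlin's actual driver is not shown in the thread, and the names and parameter syntax here are assumptions.]

```sql
-- ISAM-style "read the next batch of rows after the current key" done
-- with a plain query instead of a cursor: each call carries its own
-- LIMIT, so the planner can favour an index on the key column, and no
-- cursor or open transaction has to be kept around between calls.
-- :last_seen_key is a client-side bind parameter (placeholder syntax).
SELECT *
FROM orders                        -- placeholder table name
WHERE order_id > :last_seen_key    -- resume after the last row read
ORDER BY order_id
LIMIT 100;
```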
Merlin
| | From | Date | Subject |
|---|---|---|---|
| Next Message | Paul Johnson | 2005-02-07 22:25:17 | Solaris 9 tuning |
| Previous Message | Joost Kraaijeveld | 2005-02-07 19:27:18 | Re: Retry: Is this possible / slow performance? |