From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "David Parker" <dparker(at)tazznetworks(dot)com>
Cc: "postgres general" <pgsql-general(at)postgresql(dot)org>
Subject: Re: explicit cursor vs. for loop in pl/pgsql
Date: 2005-05-17 19:38:43
Message-ID: 1947.1116358723@sss.pgh.pa.us
Lists: pgsql-general
"David Parker" <dparker(at)tazznetworks(dot)com> writes:
> I know from the documentation that the FOR implicitly opens a cursor,
> but I'm wondering if there would be any performance advantages to
> explicitly declaring a cursor and moving through it with FETCH commands?
AFAICS it'd be exactly the same. Might as well stick with the simpler
notation.
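For concreteness, the two notations being compared look roughly like this (table, column, and variable names are invented for illustration; both forms drive the same underlying executor):

```sql
-- Implicit form: FOR opens a cursor behind the scenes and fetches row by row.
FOR rec IN SELECT * FROM items ORDER BY created_at LOOP
    -- process rec here
    EXIT WHEN done;
END LOOP;

-- Explicit form: the same work, spelled out with DECLARE/OPEN/FETCH/CLOSE.
DECLARE
    cur CURSOR FOR SELECT * FROM items ORDER BY created_at;
BEGIN
    OPEN cur;
    LOOP
        FETCH cur INTO rec;
        EXIT WHEN NOT FOUND;
        -- process rec here
    END LOOP;
    CLOSE cur;
END;
```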
> I have to use the ORDER BY, so I imagine I'm taking the hit of
> processing all the records in the table anyway, regardless of how many I
> ultimately fetch.
Not if the ORDER BY can be implemented using an index. Perhaps what you
need is to make sure that an indexscan gets used.
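One way to check this (again with an invented table name) is to create an index matching the ORDER BY and look at the plan:

```sql
-- With this index, the planner can return rows already in created_at order,
-- so the ORDER BY need not sort the whole table before the first row comes back.
CREATE INDEX items_created_at_idx ON items (created_at);

EXPLAIN SELECT * FROM items ORDER BY created_at;
-- An "Index Scan" node here is what you want, rather than a Seq Scan feeding a Sort.
```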
> The nature of the data is that chunksize doesn't necessarily match up
> one-for-one with rows, so I can't use it as a LIMIT value.
Can you set an upper bound on how many rows you need? If you can put a
LIMIT into the select, it'll encourage the planner to use an indexscan,
even if you break out of the loop before the limit is reached.
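A sketch of that pattern, assuming a hypothetical upper bound max_rows and invented column names:

```sql
-- LIMIT gives the planner a reason to prefer a fast-start indexscan plan;
-- EXIT can still leave the loop before max_rows rows are actually fetched.
FOR rec IN SELECT * FROM items ORDER BY created_at LIMIT max_rows LOOP
    total := total + rec.chunksize;
    EXIT WHEN total >= target_chunksize;
END LOOP;
```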
regards, tom lane