From:       "David Parker" <dparker(at)tazznetworks(dot)com>
To:         "postgres general" <pgsql-general(at)postgresql(dot)org>
Subject:    explicit cursor vs. for loop in pl/pgsql
Date:       2005-05-17 18:48:25
Message-ID: 07FDEE0ED7455A48AC42AC2070EDFF7C74623D@corpsrv2.tazznetworks.com
Lists:      pgsql-general
I need to process a large table a few "chunks" at a time, committing in
between chunks so that another process can pick up and start processing
the data.
I am using a PL/pgSQL procedure with a "FOR rec IN SELECT * FROM tab
ORDER BY ..." statement. The chunk size is passed in to the procedure,
and in the FOR loop I iterate until I reach the chunk size. The
procedure then returns and the calling code issues the commit, etc.
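For reference, a minimal sketch of that pattern (the table name `tab`, key column `id`, and the processing/delete steps are placeholders for whatever the real procedure does):

```sql
CREATE OR REPLACE FUNCTION process_chunk(chunksize integer)
RETURNS integer AS $$
DECLARE
    rec   record;
    nrows integer := 0;
BEGIN
    FOR rec IN SELECT * FROM tab ORDER BY id LOOP
        -- ... process rec here ...
        DELETE FROM tab WHERE id = rec.id;  -- drop the processed row
        nrows := nrows + 1;
        EXIT WHEN nrows >= chunksize;       -- stop after one chunk
    END LOOP;
    RETURN nrows;  -- caller commits, then calls again for the next chunk
END;
$$ LANGUAGE plpgsql;
```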
I know from the documentation that the FOR loop implicitly opens a
cursor, but I'm wondering if there would be any performance advantage
to explicitly declaring a cursor and stepping through it with FETCH?
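The explicit-cursor alternative would look roughly like this (same placeholder table and ordering as above); since the FOR loop already uses a cursor under the hood, I wouldn't expect this form to behave differently, but written out it is:

```sql
DECLARE
    cur CURSOR FOR SELECT * FROM tab ORDER BY id;
    rec record;
BEGIN
    OPEN cur;
    LOOP
        FETCH cur INTO rec;
        EXIT WHEN NOT FOUND;  -- no more rows
        -- ... process rec here ...
    END LOOP;
    CLOSE cur;
END;
```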
I have to use the ORDER BY, so I imagine I'm taking the hit of
processing all the records in the table anyway, regardless of how many I
ultimately fetch. The nature of the data is that chunksize doesn't
necessarily match up one-for-one with rows, so I can't use it as a LIMIT
value.
The table in question gets inserted pretty heavily, and my procedure
processes rows then deletes those it has processed. My main concern is
to keep the processing fairly smooth, i.e., not have it choke on a
select when the table gets huge.
Any suggestions appreciated!
- DAP
--------------------------------------------------------------------------------
David Parker    Tazz Networks    (401) 709-5130