From: | Birgit Laggner <birgit(dot)laggner(at)vti(dot)bund(dot)de> |
---|---|
To: | Merlin Moncure <mmoncure(at)gmail(dot)com>, Alban Hertroys <dalroi(at)solfertje(dot)student(dot)utwente(dot)nl> |
Cc: | pgsql-general(at)postgresql(dot)org |
Subject: | Re: transaction control in pl/pgsql |
Date: | 2010-04-12 13:09:04 |
Message-ID: | 4BC31B70.4080606@vti.bund.de |
Lists: | pgsql-general |
Hi Merlin, hi Alban,
thank you both for your helpful answers. I have now split the function
into smaller parts which have to be called separately, one after another.
I will probably write a script that calls all the functions in turn.
Not as nice as an all-in-one function, but if there is no other way....
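Such a driver script could be as small as the sketch below; the function names are placeholders for the actual split-out parts. Run under psql, each statement commits in its own transaction thanks to autocommit, so no single transaction spans the whole job:

```sql
-- run_all.sql: call each part in its own transaction (psql autocommit).
-- Function names are hypothetical stand-ins for the split-out functions.
\set ON_ERROR_STOP on
SELECT step1_prepare_data();
SELECT step2_process_data();
SELECT step3_cleanup();
```

Invoked as `psql -d mydb -f run_all.sql`; a failure in one step then no longer rolls back the steps that already committed.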
While executing my function snippets, I arrived at a possible explanation
for the memory overflow: the function runs through many loop cycles, and
for every cycle I let it write a notice with the current loop cycle
number. I use pgAdmin for writing and executing most of my Postgres
stuff. So, I guess, writing all these loop-cycle notices to the pgAdmin
window led to the exorbitant memory usage. Now I let the function
write a notice only every 100 loop cycles. So far, I haven't had any
problems with memory overflow anymore...
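The throttling described above can be sketched like this in pl/pgsql (the loop body and function name are placeholders, not the actual function):

```sql
-- Sketch: raise a notice only on every 100th loop cycle instead of
-- every cycle, to keep client-side (pgAdmin) message buffering small.
CREATE OR REPLACE FUNCTION process_rows() RETURNS void AS $$
BEGIN
    FOR i IN 1..100000 LOOP
        -- ... actual per-cycle work would go here ...
        IF i % 100 = 0 THEN
            RAISE NOTICE 'loop cycle %', i;
        END IF;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```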
Regards,
Birgit.
On 01.04.2010 22:33, Merlin Moncure wrote:
> On Thu, Apr 1, 2010 at 6:22 AM, Birgit Laggner
> <birgit(dot)laggner(at)vti(dot)bund(dot)de> wrote:
>
>> Dear list,
>>
>> I have some data (big size) and I've written a long function in pl/pgsql
>> which processes the data in several steps. At a test run my function
>> aborted because of memory exhaustion. My guess is, that everything what
>> happens during the function transaction is stored in the memory - until
>> it's full... So, my idea for solving that problem would be to cut the
>> big function into smaller functions. But, I don't want to write 30
>> function calls at the end - I would rather like to have one function
>> which is calling all these small functions, so I would only have to
>> write one sql-query at the end. What I fear is that, if this one
>> function calls all the other functions, everything is still a single
>> transaction and I get the memory overflow once more.
>>
> I don't know all the specifics of your case but ultimately there are
> limits to what you can reasonably do in a single transaction,
> especially if you are writing to the database. If you push the limit
> the database starts to push back. Transactions generally should be as
> short as possible. Long transactions inhibit the ability of the
> database to do certain types of maintenance on itself and have other
> issues like bad performance and memory exhaustion.
>
> Regardless of how many separate functions/savepoints/begin/end blocks
> your 'outer' function calls, your entire set of work is going to
> operate within the context of a single transaction. This is an
> ironclad rule for which (at present) there is no workaround. For this
> reason certain classes of data processing must unhappily be done on
> the client side, introducing another language and forcing all the data
> back and forth through the protocol.
>
> In the future, it may be possible to execute pl/pgsql-ish type of code
> in the backend that allows explicit transaction control. This feature
> might be a 'stored procedure', or there might be some other type of
> nomenclature to distinguish functions that manage their own
> transaction state.
>
> merlin
>
>
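Merlin's point about doing transaction control on the client side can be sketched generically. The example below uses Python's built-in sqlite3 module purely as a stand-in for a PostgreSQL driver (psycopg2 follows the same connect/commit pattern); the table and batch details are invented for illustration:

```python
import sqlite3

def process_in_batches(rows, batch_size=100):
    """Insert rows, committing after every batch so that no single
    transaction grows without bound (client-side transaction control)."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE data (v INTEGER)")
    for start in range(0, len(rows), batch_size):
        batch = rows[start:start + batch_size]
        conn.executemany("INSERT INTO data (v) VALUES (?)", batch)
        conn.commit()  # end the current transaction after each batch
    count = conn.execute("SELECT count(*) FROM data").fetchone()[0]
    conn.close()
    return count
```

With psycopg2 the shape is the same: `conn.commit()` ends the current transaction and the driver begins a new one on the next statement, which is exactly the control a single server-side function cannot exercise.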