From: "Merlin Moncure" <mmoncure(at)gmail(dot)com>
To: Peter <peter(at)greatnowhere(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Transaction eating up all RAM
Date: 2006-03-13 18:19:47
Message-ID: b42b73150603131019o6e8cf4esb79dfb69d1accddb@mail.gmail.com
Lists: pgsql-general pgsql-hackers
> >> I have a stored proc that retrieves a bunch of data, stores it in a temp
> >> table, computes all sorts of totals/averages/whatnots from the temp
> >> table, and inserts the results in another table. It works fine (except I
> >> don't like wrapping all SQL statements in 'execute'), but multiple calls
> >> to that proc from another procedure cause excessive memory usage
> >> (upwards of 400M), and the server eventually runs out of swap space. I
> >> believe this is because PG caches transactions in RAM, and this
> >> particular one is a bit too big.
Is that multiple simultaneous calls? Maybe you are over-committing
your sort memory. If you can reproduce the out-of-memory behavior
from a single backend, that argues for a memory leak.
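To illustrate the overcommit math (the numbers below are made up for
illustration; work_mem is a per-sort, per-backend limit, not a
server-wide one):

    # postgresql.conf -- illustrative value only
    work_mem = 65536    # 64MB, in kB; each sort/hash step in each
                        # backend may use up to this much memory

    # 10 simultaneous calls x 2 sorts each x 64MB =~ 1.2GB of
    # potential memory demand, independent of any leak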
P.S. You can create one function, temp_tables_init(), called once after
connecting to the backend (and not inside a transaction), which creates
all the temp tables for the session. If you do that and remember to
truncate the tables (not drop them), you can use non-dynamic pl/pgsql
calls; see the sketch below.
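A minimal sketch of that pattern (the table, column, and source/result
names are hypothetical; only the temp_tables_init() idea comes from the
suggestion above):

    CREATE OR REPLACE FUNCTION temp_tables_init() RETURNS void AS $$
    BEGIN
        -- run once per connection, outside an explicit transaction,
        -- so the temp table exists for the life of the session
        CREATE TEMP TABLE work_data (
            item_id integer,
            amount  numeric
        );
    END;
    $$ LANGUAGE plpgsql;

    CREATE OR REPLACE FUNCTION compute_totals() RETURNS void AS $$
    BEGIN
        -- TRUNCATE rather than DROP: the table keeps the same OID,
        -- so the plans pl/pgsql caches for the static statements
        -- below stay valid and no 'execute' is needed
        TRUNCATE work_data;
        INSERT INTO work_data SELECT item_id, amount FROM source_table;
        INSERT INTO results
            SELECT item_id, sum(amount), avg(amount)
            FROM work_data GROUP BY item_id;
    END;
    $$ LANGUAGE plpgsql;

Repeated calls to compute_totals() in one session then reuse the same
temp table instead of building up a new one per call.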
Merlin