From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Tom Wilcox <hungrytom(at)googlemail(dot)com>
Cc: pgsql-novice(at)postgresql(dot)org
Subject: Re: Out of memory
Date: 2010-05-14 21:45:52
Message-ID: 11154.1273873552@sss.pgh.pa.us
Lists: pgsql-novice
Tom Wilcox <hungrytom(at)googlemail(dot)com> writes:
> I am getting this error consistently after running for around 400s:
> ERROR: out of shared memory
> SQL state: 53200
> Hint: You might need to increase max_locks_per_transaction.
> Context: SQL statement "CREATE TABLE tmp_cands AS SELECT d.premise_id AS uid, d.* FROM tmp_cands_1 AS c INNER JOIN nlpg.match_data AS d ON c.pid = d.premise_id"
> PL/pgSQL function "get_match" line 61 at SQL statement
If you're doing this in a loop:
> -- Get candidate sets
> DROP TABLE IF EXISTS tmp_cands_n;
> CREATE TABLE tmp_cands_n AS SELECT nlpg.get_namenum_cands(($1).name) AS pid;
> DROP TABLE IF EXISTS tmp_cands_s;
> CREATE TABLE tmp_cands_s AS SELECT nlpg.get_street_100_cands(($1).street) AS pid;
> DROP TABLE IF EXISTS tmp_cands_p;
> CREATE TABLE tmp_cands_p AS SELECT nlpg.get_pc_cands(($1).pc) AS pid;
... you're eventually going to run out of lock space. The reason is
that each incarnation of each table is a logically independent table and
requires an independent lock to be held till the end of the transaction.
They don't just go away entirely when you DROP them (because DROP can
be rolled back).
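You can watch this happen if you like. Inside the looping transaction,
something along these lines (an illustrative query, not from the original
report) shows the relation-lock count for your own backend climbing with
each DROP/CREATE iteration:

    -- Count relation locks held by the current backend; within one
    -- transaction this keeps growing as tables are dropped and
    -- recreated, since each incarnation's lock persists to commit.
    SELECT count(*)
    FROM pg_locks
    WHERE pid = pg_backend_pid()
      AND locktype = 'relation';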
Instead of the above you should consider using DELETE or TRUNCATE to
clear the table at each iteration.
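For instance, an untested sketch along these lines (the pid column type
is a guess; declare it to match what the nlpg functions actually return):

    -- One-time setup, outside the loop:
    CREATE TABLE tmp_cands_n (pid integer);
    CREATE TABLE tmp_cands_s (pid integer);
    CREATE TABLE tmp_cands_p (pid integer);

    -- Per iteration, in place of DROP/CREATE:
    TRUNCATE tmp_cands_n;
    INSERT INTO tmp_cands_n
        SELECT nlpg.get_namenum_cands(($1).name);
    TRUNCATE tmp_cands_s;
    INSERT INTO tmp_cands_s
        SELECT nlpg.get_street_100_cands(($1).street);
    TRUNCATE tmp_cands_p;
    INSERT INTO tmp_cands_p
        SELECT nlpg.get_pc_cands(($1).pc);

Each table then gets locked just once per transaction; repeated TRUNCATEs
re-take the lock on the same relation rather than consuming new lock-table
entries.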
regards, tom lane