From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Kynn Jones" <kynnjo(at)gmail(dot)com>
Cc: "pgsql-general General" <pgsql-general(at)postgresql(dot)org>
Subject: Re: shared memory/max_locks_per_transaction error
Date: 2008-03-14 23:12:11
Message-ID: 27493.1205536331@sss.pgh.pa.us
Lists: pgsql-general
"Kynn Jones" <kynnjo(at)gmail(dot)com> writes:
> Initially I didn't know what our max_locks_per_transaction was (nor even a
> typical value for it), but in light of the procedure's failure after 3500
> iterations, I figured that it was 3500 or so. In fact ours is only 64 (the
> default), so I'm now thoroughly confused.
The number of lock slots available system-wide is
max_locks_per_transaction times max_connections, and your procedure was
chewing them all up. I suggest taking the hint's advice if you really need
to create 3500 tables in a single transaction. Actually, you'd better
do it if you want to have 3500 tables at all, because pg_dump will
certainly try to acquire AccessShare lock on all of them.
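For reference, a minimal sketch (assuming stock defaults; the numbers in
the comments are illustrative, not taken from your server) of how to check
the two settings that size the shared lock table, and how you would raise
the limit:

  -- Show the settings that determine the system-wide lock-slot count.
  SHOW max_locks_per_transaction;   -- 64 by default
  SHOW max_connections;             -- 100 by default

  -- The budget is roughly max_locks_per_transaction * max_connections,
  -- e.g. 64 * 100 = 6400 slots shared by all sessions.

  -- To raise it, set this in postgresql.conf and restart the server:
  --   max_locks_per_transaction = 256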
> Is there a way to force the release of locks within the loop?
No.
regards, tom lane