From: tfo(at)alumni(dot)brown(dot)edu (Thomas F. O'Connell)
To: pgsql-general(at)postgresql(dot)org
Subject: pg_dump out of shared memory
Date: 2004-06-17 21:34:08
Message-ID: 80c38bb1.0406171334.4e0b5775@posting.google.com
Lists: pgsql-general
When using pg_dump to dump an existing Postgres database, I get the
following:
pg_dump: WARNING: out of shared memory
pg_dump: attempt to lock table <table name> failed: ERROR: out of shared memory
HINT: You may need to increase max_locks_per_transaction.
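For reference, the relevant settings can be checked from psql; in 7.4 the
shared lock table is sized for roughly max_locks_per_transaction *
max_connections locked objects in total:

    SHOW max_locks_per_transaction;  -- default is 64
    SHOW max_connections;
    SHOW shared_buffers;             -- 1000 here, the 7.4 default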
postgresql.conf just has the default of 1000 shared_buffers. The
database itself has thousands of tables, some of which have rows
numbering in the millions. Am I correct in thinking that, despite the
hint, it's more likely that I need to up the shared_buffers?
Or is it that pg_dump is an example of "clients that touch many
different tables in a single transaction" [from
http://www.postgresql.org/docs/7.4/static/runtime-config.html#RUNTIME-CONFIG-LOCKS]
and I actually ought to abide by the hint?
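If the hint is the right answer, I assume the change would look something
like the following in postgresql.conf (the value is only illustrative, and
changing it requires a server restart):

    # illustrative value only; it needs to be large enough that
    # max_locks_per_transaction * max_connections covers every table pg_dump locks
    max_locks_per_transaction = 256    # default is 64
    #shared_buffers = 1000             # left at the 7.4 default; the HINT does not point here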
-tfo