From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: tfo(at)alumni(dot)brown(dot)edu (Thomas F. O'Connell)
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: pg_dump out of shared memory
Date: 2004-06-27 22:40:44
Message-ID: 14513.1088376044@sss.pgh.pa.us
Lists: pgsql-general
tfo(at)alumni(dot)brown(dot)edu (Thomas F. O'Connell) writes:
> Now I'm curious: why does pg_dump require that
> max_connections * max_shared_locks_per_transaction be greater than the
> number of objects in the database?
Not objects, just tables. pg_dump takes AccessShareLock (the weakest
kind of lock) on each table it intends to dump. This is basically
just to prevent someone from dropping the table underneath it. (It
would actually have to take that lock anyway as a byproduct of reading
the table contents, but we grab the locks ASAP during pg_dump startup
to reduce the risk of problems from concurrent drops.)
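
To make the locking concrete: the effect is roughly that of issuing an
explicit LOCK statement per table inside the dump's transaction. This is
a minimal sketch, not pg_dump's literal command sequence, and "mytable"
is just a placeholder name:

    BEGIN;
    -- one AccessShareLock per table to be dumped; it conflicts only with
    -- AccessExclusiveLock (DROP TABLE and the like), not ordinary reads/writes
    LOCK TABLE mytable IN ACCESS SHARE MODE;
    -- ... the data itself is read later in the same transaction (COPY/SELECT)
    COMMIT;

Each such lock occupies one entry in the shared lock table for the life
of the transaction, which is what adds up when thousands of tables are
being dumped.
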
On a database with thousands of tables, this could easily require more
locks than the default lock table size can hold. Most normal apps don't
need more than a few tables locked within any one transaction, which is
why the lock table size is calculated as a multiple of max_connections.
There's a great deal of slop involved, because we pad the shared memory
size by 100K or so, which leaves room for quite a few more lock entries
than the nominal lock table size ... but eventually you'll run out of room.
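
If you do hit that limit, the knobs involved are max_locks_per_transaction
and max_connections. A quick way to see the numbers, assuming stock
defaults (raising max_locks_per_transaction means editing postgresql.conf
and restarting the server, since it sizes shared memory):

    -- the shared lock table holds roughly
    -- max_locks_per_transaction * max_connections entries in total
    SHOW max_locks_per_transaction;   -- default is 64
    SHOW max_connections;
    -- e.g. 64 * 100 = 6400 lock slots, shared across all sessions
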
regards, tom lane