Re: pg_dump with lots and lots of tables

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Andy Colson <andy(at)squeakycode(dot)net>
Cc: PostgreSQL general <pgsql-general(at)postgresql(dot)org>
Subject: Re: pg_dump with lots and lots of tables
Date: 2013-11-02 17:15:37
Message-ID: 11774.1383412537@sss.pgh.pa.us
Lists: pgsql-general

Andy Colson <andy(at)squeakycode(dot)net> writes:
> pg_dump is upset that my max_locks_per_transaction is too low. I've bumped it up several times (up to 600 so far) but I'm not sure how many it needs.

> I'm merging 90 databases into a single database with 90 schemas. Each schema can have 500'ish tables. Do I need to set max_locks_per_transaction to (90*500) 45,000? Will that even work?

The pg_dump run will need about 45000 locks altogether, so anything north of
45000/max_connections should work (more if you have other sessions going
on at the same time).
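
For concreteness, a rough sketch of the arithmetic (assuming the default
max_connections of 100; substitute whatever you actually run with):

    -- relations pg_dump wants to lock: 90 schemas * 500 tables ~= 45000
    -- lock table size                : max_locks_per_transaction * max_connections
    -- so with max_connections = 100 : max_locks_per_transaction >= 45000 / 100 = 450
    SHOW max_connections;
    SHOW max_locks_per_transaction;

Both settings only take effect after a postmaster restart.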

Basically the lock table is sized at max_locks_per_transaction*max_connections,
and transactions can use as many entries as they want --- there's no
attempt to hold a session to its "fair share" of the table. The parameter
is only defined as it is to ensure that if you bump up max_connections the
lock table will get bigger automatically, so you won't starve sessions of
locks accidentally.
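
If you want to watch how many lock-table entries the dump is actually
consuming, one way (just a sketch; 12345 is a placeholder for the pg_dump
backend's pid, which you can look up in pg_stat_activity) is to query
pg_locks from another session while the dump runs:

    -- count lock entries held by the pg_dump backend
    SELECT count(*) AS lock_entries
    FROM pg_locks
    WHERE pid = 12345;

pg_dump takes an ACCESS SHARE lock on each table it dumps, so the count
should climb toward the total number of tables.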

> Will I ever need to bump up sysctl kernel.shmmax?

If the postmaster fails to start with the larger setting, then yes.
But lock entries aren't that large so probably it won't matter.
If it does matter, and increasing shmmax is inconvenient, you could
back off shared_buffers to make room.
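
As a sketch of that last option (the values are only placeholders), in
postgresql.conf:

    # both changes require a postmaster restart
    shared_buffers = 1GB                 # backed off to make shared-memory room
    max_locks_per_transaction = 600      # or whatever 45000/max_connections works out to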

regards, tom lane
