pg_dump with lots and lots of tables

From: Andy Colson <andy(at)squeakycode(dot)net>
To: PostgreSQL general <pgsql-general(at)postgresql(dot)org>
Subject: pg_dump with lots and lots of tables
Date: 2013-11-02 16:22:02
Message-ID: 527526AA.6040304@squeakycode.net
Lists: pgsql-general


pg_dump is upset that my max_locks_per_transaction is too low. I've bumped it up several times (up to 600 so far) but I'm not sure how many it needs.
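
In case it helps anyone reproduce this, here's a rough way to watch how many relation locks a running pg_dump is actually holding -- a minimal sketch in Python, run from a second session while the dump is in progress, assuming psycopg2 is installed and using a placeholder connection string:

import psycopg2

# Count the relation-level locks currently held across the cluster;
# pg_dump takes an AccessShareLock on every table it dumps, all in one transaction.
conn = psycopg2.connect("dbname=postgres user=postgres")  # placeholder DSN
cur = conn.cursor()
cur.execute("SELECT count(*) FROM pg_locks WHERE locktype = 'relation'")
print("relation locks currently held:", cur.fetchone()[0])
cur.close()
conn.close()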

I'm merging 90 databases into a single database with 90 schemas. Each schema can have 500-ish tables. Do I need to set max_locks_per_transaction to (90*500) 45,000? Will that even work?
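
For what it's worth, here's my back-of-the-envelope arithmetic, assuming the documented sizing rule that the shared lock table tracks about max_locks_per_transaction * (max_connections + max_prepared_transactions) lockable objects, and assuming default values of max_connections = 100 and max_prepared_transactions = 0 (substitute the real settings):

# Sketch of the lock-table sizing, not a definitive answer.
tables = 90 * 500                 # ~45,000 tables across all schemas
max_connections = 100             # assumed default; use your actual setting
max_prepared_transactions = 0     # assumed default

def lock_capacity(max_locks_per_transaction):
    # Number of distinct objects the shared lock table can track.
    return max_locks_per_transaction * (max_connections + max_prepared_transactions)

# Smallest setting whose capacity covers every table, with ~10% headroom
# for locks held by other sessions at the same time.
needed = next(n for n in range(64, 10**6) if lock_capacity(n) > tables * 1.1)
print(needed)   # 496 with these assumed values, i.e. roughly 500, not 45,000

If that rule is right, the product with max_connections matters more than the raw 45,000.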

Will I ever need to bump up sysctl kernel.shmmax?

Oh, I'm on Slackware 64, PG 9.3.1. I'm trying to get my db from the test box back to the live box. For regular backups I think I'll be switching to streaming replication.

Thanks for your time,

-Andy
