From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Kynn Jones" <kynnjo(at)gmail(dot)com>
Cc: "pgsql-general General" <pgsql-general(at)postgresql(dot)org>
Subject: Re: shared memory/max_locks_per_transaction error
Date: 2008-03-17 14:55:30
Message-ID: 9485.1205765730@sss.pgh.pa.us
Lists: pgsql-general

"Kynn Jones" <kynnjo(at)gmail(dot)com> writes:
> I'm leaning towards the re-design option, primarily because I don't
> really understand the consequences of cranking up max_locks_per_transaction.
> E.g. Why is its default value 2^6, instead of, say, 2^15? In fact, why is
> there a ceiling on the number of locks at all?
Because the size of the lock table in shared memory has to be set at
postmaster start.
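
For reference, the PostgreSQL documentation gives the lock table's capacity
as roughly max_locks_per_transaction * (max_connections +
max_prepared_transactions), all allocated in shared memory when the
postmaster starts. A minimal sketch of inspecting and raising it (the
value 256 below is purely illustrative):

```sql
-- The settings that size the shared lock table, fixed at postmaster start:
SHOW max_locks_per_transaction;    -- default: 64
SHOW max_connections;              -- default: 100
SHOW max_prepared_transactions;

-- Capacity is roughly the product:
--   max_locks_per_transaction * (max_connections + max_prepared_transactions)
-- Raising the limit means editing postgresql.conf and restarting, e.g.:
--   max_locks_per_transaction = 256   -- illustrative value, not a recommendation
```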
There are people running DBs with a couple hundred thousand tables,
but I don't know what sorts of performance problems they face when
they try to run pg_dump. I think most SQL experts would suggest
a redesign: if you have lots of essentially identical tables the
standard advice is to fold them all into one table with one more
key column.
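
As a hypothetical illustration of that advice (the table and column names
here are invented for the example), many near-identical tables collapse
into one table plus a discriminating key:

```sql
-- Hypothetical redesign: replace many per-entity tables
-- (data_a, data_b, ...) with one table and an extra key column.
CREATE TABLE combined_data (
    entity_id  text    NOT NULL,   -- the former table name becomes a key value
    item_id    integer NOT NULL,
    payload    text,
    PRIMARY KEY (entity_id, item_id)
);

-- A query against one of the old tables becomes a filter on the key:
SELECT payload FROM combined_data WHERE entity_id = 'data_a';
```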
regards, tom lane