From: Brian Karlak <zenkat(at)metaweb(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: table partitioning & max_locks_per_transaction
Date: 2009-10-11 16:44:31
Message-ID: 277C854D-63F2-4353-9756-E151CFD2CB90@metaweb.com
Lists: pgsql-performance
Tom --
Thanks for the pointers and advice. We've started by doubling
max_locks_per_transaction and halving shared_buffers; we'll see how it goes.
Brian
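
For reference, the change described above amounts to roughly the following
postgresql.conf edit (a sketch only: 64 is simply the stock default for
max_locks_per_transaction, and the actual starting values on Brian's system
aren't given). The shared lock table holds
max_locks_per_transaction * (max_connections + max_prepared_transactions)
entries, so raising the setting grows the server's shared-memory request;
shrinking shared_buffers compensates to stay under the kernel's SHMMAX:

    # postgresql.conf -- both settings require a server restart
    max_locks_per_transaction = 128   # doubled from the assumed default of 64
    shared_buffers = 1GB              # halved from ~2GB to offset the larger lock table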
On Oct 10, 2009, at 7:56 PM, Tom Lane wrote:
> Brian Karlak <zenkat(at)metaweb(dot)com> writes:
>> "out of shared memory HINT: You might need to increase
>> max_locks_per_transaction"
>
> You want to do what it says ...
>
>> 1) We've already tuned postgres to use ~2GB of shared memory -- which
>> is SHMMAX for our kernel. If I try to increase
>> max_locks_per_transaction, postgres will not start because our shared
>> memory is exceeding SHMMAX. How can I increase
>> max_locks_per_transaction without having my shared memory
>> requirements increase?
>
> Back off shared_buffers a bit? 2GB is certainly more than enough
> to run Postgres in.
>
>> 2) Why do I need locks for all of my subtables, anyways? I have
>> constraint_exclusion on. The query planner tells me that I am only
>> using three tables for the queries that are failing. Why are all of
>> the locks getting allocated?
>
> Because the planner has to look at all the subtables and make sure
> that they in fact don't match the query. So it takes AccessShareLock
> on each one, which is the minimum strength lock needed to be sure that
> the table definition isn't changing underneath you. Without *some* lock
> it's not really safe to examine the table at all.
>
> regards, tom lane
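
Tom's explanation is easy to verify from psql. A minimal sketch (the
"events" tables and date ranges are invented for illustration), using the
inheritance-style partitioning of the 8.x era: query the parent inside a
transaction, then look at pg_locks -- an AccessShareLock shows up on every
child, even though constraint exclusion pruned all but one of them from
the plan (the pg_locks view itself will also appear in the output):

    -- one-time setup: inheritance-based partitions with CHECK constraints
    CREATE TABLE events (id int, created date);
    CREATE TABLE events_2009_01
        (CHECK (created >= '2009-01-01' AND created < '2009-02-01')) INHERITS (events);
    CREATE TABLE events_2009_02
        (CHECK (created >= '2009-02-01' AND created < '2009-03-01')) INHERITS (events);

    SET constraint_exclusion = on;

    BEGIN;
    SELECT * FROM events WHERE created = '2009-01-15';  -- plan scans only events_2009_01
    -- every child is listed anyway: the planner had to take
    -- AccessShareLock on each one before it could examine its CHECK
    -- constraint and prove the table irrelevant to the query
    SELECT relation::regclass, mode
      FROM pg_locks
     WHERE pid = pg_backend_pid() AND locktype = 'relation';
    COMMIT;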