From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Maciek Sakrejda <m(dot)sakrejda(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Out of shared mem on new box with more mem, 9.1.5 -> 9.1.6
Date: 2012-10-17 14:18:46
Message-ID: 19833.1350483526@sss.pgh.pa.us
Lists: pgsql-performance
Maciek Sakrejda <m(dot)sakrejda(at)gmail(dot)com> writes:
> We've run into a perplexing issue with a customer database. He moved
> from 9.1.5 to 9.1.6 and upgraded from an EC2 m1.medium (3.75 GB RAM,
> 1.3 GB shmmax) to an m2.xlarge (17 GB RAM, 5.7 GB shmmax), and is now
> regularly getting errors about running out of shared memory (there were
> none in the last couple of days of logs from the old system before the
> upgrade):
> ERROR: out of shared memory
> HINT: You might need to increase max_pred_locks_per_transaction.
This has nothing to do with work_mem or maintenance_work_mem; rather,
it means you're running out of space in the database-wide lock table.
You need to take the hint's advice.
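
For reference, a minimal sketch of doing that (the value here is purely
illustrative; the right number depends on the workload, the default is 64,
and changing it requires a server restart because it sizes shared memory):

    # postgresql.conf
    max_pred_locks_per_transaction = 256    # default 64; restart required
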
> The query causing this has structurally identical plans on both systems:
> old: http://explain.depesz.com/s/Epzq
> new: http://explain.depesz.com/s/WZo
The query in itself doesn't seem very exceptional. I wonder whether
you recently switched your application to use serializable mode? But
anyway, a query's demand for predicate locks can depend on a lot of
not-very-visible factors, such as how many physical pages the tuples
it accesses are spread across. I don't find it too hard to credit
that yesterday you were just under the limit and today you're just
over even though "nothing changed".
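
A rough way to check how heavily a workload is hitting the predicate lock
table (assuming you can query pg_locks on the affected server) is to count
the SIRead locks it holds, by granularity:

    -- predicate (SIRead) locks currently held; lots of tuple- and
    -- page-level entries is what fills up the shared table
    SELECT locktype, count(*)
      FROM pg_locks
     WHERE mode = 'SIReadLock'
     GROUP BY locktype
     ORDER BY count(*) DESC;

SHOW default_transaction_isolation will also confirm whether serializable
mode is on by default.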
regards, tom lane