From: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
To: Daniel Lagerman <spjheruur(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Impact of multixact "members" limit exceeded
Date: 2018-06-14 14:37:44
Message-ID: 20180614143743.nerkcm3w4ajpydz6@alvherre.pgsql
Lists: pgsql-general
Hello Daniel,
The advice from Lawrence downthread is very much on point. If you can
upgrade to 9.5, do it. He mentions savepoints, but one more case is
plpgsql blocks with EXCEPTION clauses, which open implicit savepoints
(see the sketch below).
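For illustration, here is the sketch mentioned above (table "t" and its
columns are made up): a row locked by the parent transaction and then
updated inside a subtransaction ends up locked by two transaction IDs,
which is recorded as a multixact.

  BEGIN;
  SELECT * FROM t WHERE id = 1 FOR UPDATE;  -- parent xact locks the row
  SAVEPOINT s1;                             -- or enter a plpgsql EXCEPTION block
  UPDATE t SET val = val + 1 WHERE id = 1;  -- subxact locks it again: multixact created
  RELEASE SAVEPOINT s1;
  COMMIT;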
On 2018-Jun-13, Daniel Lagerman wrote:
> Hello Álvaro,
>
> I'm running 9.4.3; I know it's an older version, but upgrading is outside
> the scope right now for various reasons.
Okay. There's at least one related bug fix in 9.4.4 (b6a3444fa635), and
commits 8507a5b37bd9 and ec1408155d35 are relevant too, so you'd be
better served by an upgrade anyway.
> Based on the settings I posted what would be your recommended settings
> right now to resolve this situation?
Set vacuum_multixact_freeze_table_age to 0 (causing all vacuums to scan
tables fully) and vacuum_multixact_freeze_min_age to some value like one
million (so that they remove most of the oldest multixacts, leaving
just the newest one million). Then vacuum the tables with the highest
multixact ages. Make sure datminmxid advances in all databases
(including postgres, template0 and template1); once it has advanced,
the oldest member files are removed. Don't waste time processing tables
whose relminmxid is higher than the minimum, as that won't free up any
member space.
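For concreteness, a sketch of such a session (the table name is
hypothetical; both settings can be changed per session):

  -- make every vacuum scan the table fully and freeze aggressively
  SET vacuum_multixact_freeze_table_age = 0;
  SET vacuum_multixact_freeze_min_age = 1000000;
  VACUUM some_big_table;

  -- afterwards, confirm that datminmxid has advanced everywhere
  SELECT datname, datminmxid FROM pg_database;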
> I very much appreciate the recommendation, but I would also like to know
> the impact of this warning; data right now does not seem to be corrupted
> or overwritten, and inserts and reads are not causing issues.
Corruption should not occur because 9.4.3 already contains the member
wraparound protections (commit 3ecab37d97ed).
> It just seems that it has stopped creating "members" and I can't tell
> what problem that would cause. The table was around 1.6 TB; it was
> cleaned up and then re-packed using pg_repack to go down to around
> 400 GB in size; however, a freeze vacuum or autovacuum did not complete.
Uhh .. pg_repack has been known to cause catalog corruption. I don't
know if those bugs have been fixed. At 2ndQuadrant we recommend *not*
running pg_repack.
> Then, when it comes to vacuum, do you think it's worth increasing
> work_mem to say 200 GB,
work_mem does not affect vacuum. maintenance_work_mem is the one to
change; autovacuum_work_mem can be used for autovacuum.
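For a manual maintenance session, something like this (the table name is
hypothetical; note that in this release a single VACUUM uses at most
1 GB for its dead-tuple array, so values beyond that buy nothing):

  SET maintenance_work_mem = '1GB';
  VACUUM some_big_table;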
> for the maintenance period only, and let autovacuum go to town
> on the tables with the highest age and lowest relminmxid? Or should one
> run, for example, flexible freeze instead with zero pauses, as nothing is
> happening on the weekends, which means no inbound transactions.
"Zero pauses" (vacuum_cost_delay=0) is best, because it's obviously much
faster.
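Manual VACUUM is already unthrottled (vacuum_cost_delay defaults to 0);
for autovacuum the knob is autovacuum_vacuum_cost_delay. A sketch:

  -- unthrottle autovacuum; ALTER SYSTEM exists as of 9.4, needs a reload
  ALTER SYSTEM SET autovacuum_vacuum_cost_delay = 0;
  SELECT pg_reload_conf();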
> I have tried running autovacuum for 40 hours before, with 5 GB work_mem,
> on the weekend, across 6 tables at the same time, and it never completed
> on the bigger tables, which are around 2 TB.
So what did you do, cancel it?
--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services