From: | Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com> |
---|---|
To: | Alexandre Arruda <adaldeia(at)gmail(dot)com> |
Cc: | pgsql-general(at)lists(dot)postgresql(dot)org |
Subject: | Re: ERROR: found multixact from before relminmxid |
Date: | 2018-04-06 16:11:10 |
Message-ID: | 0a057777-206e-b7d8-c078-4ca58678c6fd@2ndquadrant.com |
Lists: | pgsql-general |
On 04/06/2018 04:29 PM, Alexandre Arruda wrote:
> 2018-04-06 9:39 GMT-03:00 Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>:
>
>
>
> On 04/06/2018 02:09 AM, Alexandre Arruda wrote:
> > Hi,
> >
> > Some time ago, these errors frequently showed up in the logs after
> > autovacuum ran on some tables (pg 9.6). VACUUM FULL or CLUSTER on these
> > tables shows the same error and does not complete the task (as shown by
> > a table bloat select).
> > Then I did a full dump/restore into a new version (10.2) and everything
> > was ok for a couple of months. Now I have errors like this again:
> >
> > db1=# cluster pc44t;
> >
> > ERROR: found multixact 134100944 from before relminmxid 192042633
> >
> >
> >
> > Like before, the only way to make the errors disappear is to
> > dump/reload the whole table.
> >
> >
> > Thanks for any help.
> >
>
> That's going to be hard, unless you still have the cluster around.
>
> This surely seems like some sort of data corruption issue, but without
> being able to inspect the data pages it's nearly impossible to determine
> what went wrong.
>
> We'd also need more information about what happened to the hardware and
> cluster before the issues started to appear - crashes, hardware issues,
> and so on.
>
> regards
>
> --
> Tomas Vondra http://www.2ndQuadrant.com
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>
>
> Hi Tomas,
> The old cluster is gone, unfortunately.
>
> This server has 128GB of ECC RAM and dual redundant HBA fibre channel
> connections to a storage array with RAID 6, and apparently no errors
> are reported.
> Yesterday I did a test with one table: some sum aggregates, count(*),
> etc., then a dump/reload, and repeated the tests. The query results are
> the same, and the vacuum problem
> has disappeared.
>
I'm not sure I understand correctly. So you can reproduce the issue? If
so, can you share the scripts/data you use (and how large are they)?
If we could reproduce it locally, it would make the investigation much
easier.
BTW you mentioned you're using PostgreSQL 9.6 - which minor version,
specifically?
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
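For reference, the values under discussion can be inspected directly from the system catalogs. This is a minimal sketch rather than anything posted in the thread; it assumes the table name pc44t from the error message above:

```sql
-- Compare the table's relminmxid (cited in the ERROR message) with the
-- database-wide datminmxid; both are standard catalog columns.
SELECT relname, relfrozenxid, relminmxid
FROM pg_class
WHERE relname = 'pc44t';

SELECT datname, datfrozenxid, datminmxid
FROM pg_database
WHERE datname = current_database();

-- Exact server version, as asked above:
SELECT version();
```

A multixact ID in a tuple that is older than the table's relminmxid is what triggers the "found multixact ... from before relminmxid" error during VACUUM FULL or CLUSTER.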