Re: Duplicate rows during pg_dump

From: Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com>
To: Marc Mamin <M(dot)Mamin(at)intershop(dot)de>, Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>, Chaz Yoon <chaz(at)shopspring(dot)com>, "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Duplicate rows during pg_dump
Date: 2015-10-26 00:54:55
Message-ID: 562D79DF.9070905@BlueTreble.com
Lists: pgsql-general

On 10/24/15 3:15 PM, Marc Mamin wrote:
>>> Any suggestions for what to look for next? Is it table corruption?
> Most likely the index is corrupt, not the table.
> You should check for further duplicates, fix them, and as Adrian writes,
> build a new index and then drop the corrupt one.
>
> I've seen this a few times before, and if I recall correctly it was always after some disk got full.
> Is AWS running out of space? :)
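A rough sketch of the check-and-rebuild procedure described above, assuming a hypothetical table `t` whose unique index on column `id` is the corrupt one (table, column, and index names here are placeholders, not from the original thread):

```sql
-- Force a sequential scan so the (untrusted) corrupt index is not
-- consulted while hunting for duplicates:
SET enable_indexscan = off;
SET enable_bitmapscan = off;

-- Find values that violate the supposedly-unique constraint:
SELECT id, count(*)
FROM t
GROUP BY id
HAVING count(*) > 1;

-- After resolving the duplicate rows, build a fresh unique index
-- without blocking writes, then drop the corrupt one:
CREATE UNIQUE INDEX CONCURRENTLY t_id_new ON t (id);
DROP INDEX t_id_idx;
ALTER INDEX t_id_new RENAME TO t_id_idx;
```

Note that CREATE INDEX CONCURRENTLY will fail if duplicates still exist, which doubles as a final sanity check before dropping the old index.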

You should report this to the RDS team, because an out-of-space
condition shouldn't leave duplicate entries in a unique index. I suspect
they've made a modification somewhere that is causing this. It could be
a base Postgres bug, but I'd think we'd have caught such a bug by now...
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
