Re: Remove duplicated row in pg_largeobject_metadata

From: Tobias Meyer <t9m(at)qad(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: Remove duplicated row in pg_largeobject_metadata
Date: 2021-09-18 19:00:18
Message-ID: CAAEpUZm7tExX6Q88SN=KQ2R9P0QE3SOVd2=2WHy3DXHe9Hpgzw@mail.gmail.com
Lists: pgsql-general

>
> Yipes. Did you verify that the TIDs are all distinct?
>
Yes, they were.

> A possible theory is that pg_largeobject_metadata_oid_index has been
> corrupt for a long time, allowing a lot of duplicate entries to be made.
> However, unless pg_largeobject's pg_largeobject_loid_pn_index is *also*
> corrupt, you'd think that creation of such duplicates would still be
> stopped by that unique index. There's something mighty odd here.
>

That would be a mildly disturbing thought indeed.
Is there any way to quickly check that without reindexing?
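(For what it's worth, a sketch of such a check without a full REINDEX could use the amcheck contrib extension - available since PostgreSQL 10, with the heapallindexed option since 11 - assuming superuser access on the affected instance:

```sql
-- Sketch only: verify B-tree index structure without rebuilding it.
-- Requires the amcheck contrib extension and appropriate privileges.
CREATE EXTENSION IF NOT EXISTS amcheck;

-- Check the unique index on pg_largeobject_metadata for structural
-- corruption; passing heapallindexed => true also verifies that every
-- heap tuple has a matching index entry.
SELECT bt_index_check('pg_largeobject_metadata_oid_index', true);

-- Same check for the large-object data index.
SELECT bt_index_check('pg_largeobject_loid_pn_index', true);
```

Note this takes only a shared lock, unlike REINDEX.)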

But if those duplicates were inserted during normal operation, that would
mean there had been an OID wraparound, correct?
And that would also mean the same OID would have to be referenced in more
than one place (by different LOs, actually), which I could not see in the
other tables. I did not check all 2.9 million of them, though.
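(A duplicate scan along those lines could look like the following - a hypothetical sketch, not what I actually ran; a full check of every application column referencing LO OIDs would still be schema-specific:

```sql
-- Sketch: find OIDs that appear more than once in
-- pg_largeobject_metadata, together with their distinct TIDs.
SELECT oid, count(*) AS copies, array_agg(ctid) AS tids
FROM pg_largeobject_metadata
GROUP BY oid
HAVING count(*) > 1
ORDER BY copies DESC;

-- Corresponding check on the data table: a corrupt
-- pg_largeobject_loid_pn_index could hide duplicate (loid, pageno) pairs.
SELECT loid, pageno, count(*)
FROM pg_largeobject
GROUP BY loid, pageno
HAVING count(*) > 1;
```

Both queries do full sequential scans, so they bypass the possibly corrupt indexes.)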

Let me roll back the test instance to before the first vacuumlo run and
verify whether the index was OK beforehand - I will only get to do that on
Monday, though.

Kind regards,
Tobias
