From: David Johnston <polobo(at)yahoo(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Wiki Page Draft for upcoming release
Date: 2014-03-18 01:59:35
Message-ID: 1395107975236-5796503.post@n5.nabble.com
Lists: pgsql-hackers
I sent a post to -general with a much more detailed brain dump of my current
understanding of this topic. The main point I'm addressing here is how to
recover from this problem.
A symptom of the problem is that pg_dump/restore can fail, yet (in some
instances) the only viable recovery mechanism is pg_dump/restore. That means
someone so afflicted is going to lose data since their last good dump - if
they still have one.
However, if the underlying table does not actually contain any duplicate
data, then such a dump/restore cycle (or, I would think, a REINDEX or
DROP/CREATE INDEX sequence) should resolve the problem. If there is
duplicate data, the user needs to identify and remove the offending records
so that subsequent actions do not fail with a duplicate-key error.
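To check whether duplicates actually exist at the heap level, something
along these lines should work - a sketch only, with a hypothetical table
"orders" carrying a unique index on order_no (adjust names to the afflicted
table). Disabling index scans makes the query read the heap rather than the
possibly-corrupt index:

    -- force a sequential scan so the corrupt index is not consulted
    SET enable_indexscan = off;
    SET enable_bitmapscan = off;

    -- list key values that appear in more than one live row
    SELECT order_no, count(*)
    FROM orders
    GROUP BY order_no
    HAVING count(*) > 1;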
If this is true, then providing a query (or queries) that can identify the
problem records and delete them from the table - along with any staging that
is necessary (like first dropping affected indexes, if applicable) - would
be a nice addition.
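For instance (same hypothetical names as above; this is just a sketch, not a
vetted recipe - which rows to keep is ultimately the user's call):

    BEGIN;
    -- drop the damaged index first so the DELETE does not rely on it
    DROP INDEX IF EXISTS orders_order_no_idx;
    -- keep the row with the lowest ctid for each key, delete the others
    DELETE FROM orders a
    USING orders b
    WHERE a.order_no = b.order_no
      AND a.ctid > b.ctid;
    -- rebuild the unique index over the now-deduplicated heap
    CREATE UNIQUE INDEX orders_order_no_idx ON orders (order_no);
    COMMIT;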
David J.
--
View this message in context: http://postgresql.1045698.n5.nabble.com/Wiki-Page-Draft-for-upcoming-release-tp5796494p5796503.html
Sent from the PostgreSQL - hackers mailing list archive at Nabble.com.