From: Bryan Murphy <bmurphy1976(at)gmail(dot)com>
To: Magnus Hagander <magnus(at)hagander(dot)net>
Cc: PGSQL Mailing List <pgsql-general(at)postgresql(dot)org>
Subject: Re: missing chunk number 0 for toast value 25693266 in pg_toast_25497233
Date: 2010-05-07 14:53:13
Message-ID: k2n7fd310d11005070753w1348c45ds9d24dafd1a351f42@mail.gmail.com
Lists: pgsql-general
On Fri, May 7, 2010 at 9:02 AM, Magnus Hagander <magnus(at)hagander(dot)net> wrote:
> Try doing a binary search with LIMIT. E.g., if you have 20M records,
> do a SELECT * FROM ... LIMIT 10M (and throw away the results). If that
> broke, check the upper half; if not, check the lower one (with
> OFFSET).
>
> If you have a serial primary key or something, you can use WHERE on
> it, which will likely be a lot faster than using LIMIT, but the same
> idea applies: do a binary search. It should take a lot less than days,
> and is reasonably easy to script.
That's my next step if I can't find a quicker/simpler method. My math
tells me that my current script is going to take 24 days to test every
record. Obviously, there are ways I can speed that up if I have no
choice, but I'm hoping for a simpler solution.
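If it comes to that, I'm picturing something along these lines (an
untested Python/psycopg2 sketch of your binary-search idea; the table
and column names, big_table and id, are made up):

import psycopg2

conn = psycopg2.connect("dbname=mydb")

def range_is_broken(lo, hi):
    """True if reading rows with lo <= id < hi raises the toast error."""
    cur = conn.cursor()
    try:
        # Casting each row to text forces all of its TOAST values to be
        # read on the server, without shipping the rows to the client.
        cur.execute(
            "SELECT sum(length(t::text)) FROM big_table t "
            "WHERE id >= %s AND id < %s", (lo, hi))
        cur.fetchone()
        return False
    except psycopg2.DatabaseError:
        conn.rollback()
        return True
    finally:
        cur.close()

# Search a known id range that contains at least one unreadable row.
lo, hi = 1, 20000000
while hi - lo > 1:
    mid = (lo + hi) // 2
    if range_is_broken(lo, mid):
        hi = mid    # the bad row is in the lower half
    else:
        lo = mid    # lower half reads fine, so it's in the upper half
print("lowest unreadable id:", lo)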
I'd prefer to run a COPY TABLE-like command and have it skip the bad records.
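Since COPY itself errors out at the first missing chunk and has no way
to skip a row, the closest thing I can think of is copying in big id
ranges and recursively splitting any range that fails. Again just a
sketch with made-up names, assuming big_table_salvage has the same
schema as big_table:

import psycopg2

conn = psycopg2.connect("dbname=mydb")

def salvage_range(lo, hi):
    """Copy rows with lo <= id < hi; return the ids that couldn't be read."""
    cur = conn.cursor()
    try:
        cur.execute(
            "INSERT INTO big_table_salvage "
            "SELECT * FROM big_table WHERE id >= %s AND id < %s", (lo, hi))
        conn.commit()
        return []
    except psycopg2.DatabaseError:
        conn.rollback()
        if hi - lo == 1:
            return [lo]           # isolated one unreadable row: skip it
        mid = (lo + hi) // 2      # split the range and retry both halves
        return salvage_range(lo, mid) + salvage_range(mid, hi)
    finally:
        cur.close()

cur = conn.cursor()
cur.execute("SELECT min(id), max(id) FROM big_table")
lo, hi = cur.fetchone()
print("skipped ids:", salvage_range(lo, hi + 1))

Most ranges should succeed in a single INSERT, so it only degrades to
row-at-a-time work around the corrupted ids.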
Thanks,
Bryan