From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: wetter wetterana <wetterana(at)gmail(dot)com>, "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: "ERROR: could not read block 4459549 in file "base/16384/16956.34": Result too large"
Date: 2014-12-29 19:31:49
Message-ID: 1459323943.1433683.1419881509980.JavaMail.yahoo@jws100206.mail.ne1.yahoo.com
Lists: pgsql-general
wetter wetterana <wetterana(at)gmail(dot)com> wrote:
> I have a huge database which I am populating in batches. One of
> the tables seems to have gotten 'corrupted'; I cannot query it
> anymore. I'm pretty sure that I could identify the batch of rows
> where the mistake must be, so if I could somehow revert it to an
> earlier state or temporarily query it, I could try to delete the
> last batch of records I added, which might solve the problem.
>
> I already tried adding "zero_damaged_pages = on" to
> postgresql.conf, as suggested in another post here, but even then
> I wasn't able to query the table.
zero_damaged_pages won't help if you don't read the page. The file
should not have that many pages, and apparently doesn't, so I would
suspect a bad index. Try REINDEX TABLE on the problem table. That
will not share the table with any other activity and may run for a
while; so if you are at least somewhat functional and don't want to
block all access to the table for the duration of the builds, you
can CREATE INDEX CONCURRENTLY for each index, drop the old index,
and rename the new one. Primary keys can be a particular bother
this way, though.
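
If you want to confirm it is an index before rebuilding, the
relfilenode in the error message (16956, from
"base/16384/16956.34") can be looked up in pg_class. A quick check
along these lines:

    -- Which relation does base/16384/16956.* belong to?
    -- relkind is 'r' for a table, 'i' for an index.
    SELECT relname, relkind
      FROM pg_class
     WHERE relfilenode = 16956;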
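
For illustration, with hypothetical names (a table mytable with an
index mytable_col_idx on column col), the two approaches look
roughly like this:

    -- Simple, but takes locks that block other use of the table:
    REINDEX TABLE mytable;

    -- Less intrusive, one index at a time:
    CREATE INDEX CONCURRENTLY mytable_col_idx_new ON mytable (col);
    DROP INDEX mytable_col_idx;
    ALTER INDEX mytable_col_idx_new RENAME TO mytable_col_idx;

For a primary key you would need to build the unique index
concurrently and then swap it in with ALTER TABLE ... ADD CONSTRAINT
... PRIMARY KEY USING INDEX after dropping the old constraint, which
is where the bother comes in.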
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company