From: Cédric Villemain <cedric(dot)villemain(dot)debian(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Ken Caruso <ken(at)ipl31(dot)net>, "pgsql-admin(at)postgresql(dot)org" <pgsql-admin(at)postgresql(dot)org>
Subject: Re: 9.0.4 Data corruption issue
Date: 2011-07-20 13:32:43
Message-ID: CAF6yO=3aA+-S08pGmpjOjN2Q+NqJ0x6D5Jre76tYK4pdv8RYHQ@mail.gmail.com
Lists: pgsql-admin
2011/7/20 Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>:
> Ken Caruso <ken(at)ipl31(dot)net> writes:
>> On Sun, Jul 17, 2011 at 3:04 AM, Cédric Villemain <
>> cedric(dot)villemain(dot)debian(at)gmail(dot)com> wrote:
>>>> Block number 12125253 is bigger that any block we can find in
>>>> base/2651908/652397108.1
>
>>> Should the table size be in the 100GB range or 2-3 GB range ?
>
>> The DB was in the 200GB-300GB range when this happened.
>
> Cédric was asking about the particular table's size, not the whole DB...
> the table in question is the one with relfilenode = 652397108.
Yes, the root cause is probably a failure outside PostgreSQL. It
should not happen, but... well, I was wondering how a table could
end up accidentally partially truncated by PostgreSQL (truncation of
a segment file, not the SQL TRUNCATE)...
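For reference, PostgreSQL stores a heap relation as 8 kB blocks split across 1 GB segment files (`relfilenode`, `relfilenode.1`, `relfilenode.2`, ...) under the stock build defaults, so one can check which segment a reported block number would have to live in. A minimal sketch (the block and segment sizes are the default compile-time values, not taken from this thread):

```python
# Locate which segment file of a PostgreSQL relation should contain a
# given block, assuming stock build defaults: BLCKSZ = 8192 bytes and
# 1 GB per segment file.

BLCKSZ = 8192                                 # default block size
SEGMENT_BYTES = 1024 ** 3                     # default segment size (1 GB)
BLOCKS_PER_SEGMENT = SEGMENT_BYTES // BLCKSZ  # 131072 blocks per segment

def locate_block(relfilenode: int, block: int) -> tuple[str, int]:
    """Return (segment file name, byte offset within that segment)."""
    segment = block // BLOCKS_PER_SEGMENT
    offset = (block % BLOCKS_PER_SEGMENT) * BLCKSZ
    suffix = "" if segment == 0 else f".{segment}"
    return f"{relfilenode}{suffix}", offset

# The block from the error message would sit in segment 92, i.e. the
# relation would have to be roughly 92 GB for that block to exist:
print(locate_block(652397108, 12125253))  # ('652397108.92', 545824768)
```

This is why the reported block number matters: block 12125253 implies a ~92 GB relation, so if the table was only a few GB (one segment file plus a partial `.1`), the block reference itself must be corrupt or the file was shortened outside PostgreSQL.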
--
Cédric Villemain +33 (0)6 20 30 22 52
http://2ndQuadrant.fr/
PostgreSQL: Support 24x7 - Développement, Expertise et Formation