From: "Gabriel E. Sánchez Martínez" <gabrielesanchez(at)gmail(dot)com>
To: Tomas Vondra <tv(at)fuzzy(dot)cz>, Postgres General <pgsql-general(at)postgresql(dot)org>
Subject: Re: invalid memory alloc request size
Date: 2014-12-16 20:49:10
Message-ID: 54909AC6.90602@gmail.com
Lists: pgsql-general
On 12/10/2014 01:48 PM, Tomas Vondra wrote:
> On 10.12.2014 17:07, Gabriel Sánchez Martínez wrote:
>> Hi all,
>>
>> I am running PostgreSQL 9.3.5 on Ubuntu Server 14.04 64 bit with 64 GB
>> of RAM. When running pg_dump on a specific table, I get the following
>> error:
>>
>> pg_dump: Dumping the contents of table "x_20131111" failed:
>> PQgetResult() failed.
>> pg_dump: Error message from server: ERROR: invalid memory alloc request
>> size 18446744073709551613
>> pg_dump: The command was: COPY public.x_20131111 (...) TO stdout;
>> pg_dump: [parallel archiver] a worker process died unexpectedly
>>
>> If I run a COPY TO file from psql I get the same error.
>>
>> Is this an indication of corrupted data? What steps should I take?
> In my experience, issues like this are caused by a corrupted varlena
> header (i.e. corruption in text/varchar/... columns).
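[Aside, not part of the original exchange: the enormous size in the error message is itself consistent with this diagnosis. Reinterpreted as a signed 64-bit integer it is -3, which is what you would expect if a garbage varlena length header were handed to the allocator. A quick sketch:]

```python
# Allocation size reported in the server error message.
requested = 18446744073709551613

# Reinterpret as a signed 64-bit (two's-complement) value.
signed = requested - 2**64 if requested >= 2**63 else requested

print(signed)          # -3
print(hex(requested))  # 0xfffffffffffffffd
```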
>
> How exactly that corruption happened is difficult to say - it might be a
> faulty hardware (RAM, controller, storage), it might be a bug (e.g.
> piece of memory gets overwritten by random data). Or it might be a
> consequence of incorrect hardware configuration (e.g. leaving the
> on-disk write cache enabled).
>
> If you have a backup of the data, use that instead of recovering the
> data from the current database - it's faster and safer.
>
> However, it might be worth spending some time analyzing the corruption
> to identify the cause, so that you can prevent it next time.
>
> There are tools that might help you with that - the "pageinspect"
> extension gives you a low-level view of the data files. It can be quite
> tedious, though, and it may not work on badly broken data.
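[Aside, not part of the original exchange: a pageinspect session for this kind of check might look like the sketch below. It is only a sketch - it needs a live server, the table name and block number are placeholders, and the `natts` expression assumes the attribute count sits in the low 11 bits of the `t_infomask2` header field.]

```sql
CREATE EXTENSION IF NOT EXISTS pageinspect;

-- Examine the tuple headers on one heap block of the suspect table.
SELECT lp, lp_flags, t_ctid,
       t_infomask2 & 2047 AS natts   -- low 11 bits = attribute count
FROM heap_page_items(get_raw_page('public.x_20131111', 0));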
>
> Another option is "pg_check" - an extension I wrote a few years back. It
> analyzes the data file and prints info on all corruption occurrences.
> It's available at https://github.com/tvondra/pg_check and I just pushed
> some minor fixes to make it 9.3-compatible.
Thanks for providing and updating the extension. I used pg_check and
got the following messages:
WARNING: [104112:52] tuple has too many attributes. 150 found, 33 expected
WARNING: [104112] is probably corrupted, there were 1 errors reported
The table has 33 columns.
Running with the DEBUG3 message level, I get the following:
DEBUG: [104112:52] tuple is LP_NORMAL
DEBUG: [104112:52] checking attributes for the tuple
WARNING: [104112:52] tuple has too many attributes. 150 found, 33 expected
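[Aside, not part of the original exchange: the bogus attribute count fits how heap tuples store that number. It lives in the low 11 bits of the 16-bit `t_infomask2` header field (`HEAP_NATTS_MASK` in PostgreSQL's `htup_details.h`), so stray garbage in a single header byte is enough to turn 33 into 150. A hypothetical sketch - the `0xB7` garbage value is made up purely for illustration:]

```python
# Attribute count is stored in the low 11 bits of t_infomask2
# (HEAP_NATTS_MASK in PostgreSQL's htup_details.h).
HEAP_NATTS_MASK = 0x07FF

healthy = 33               # the table really has 33 columns
garbled = healthy ^ 0xB7   # hypothetical garbage in the low byte

print(garbled & HEAP_NATTS_MASK)  # 150, as pg_check reported
```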
I assume this was disk data corruption. Is there anything I should do
to investigate further? At this point the table has been restored from
a backup, so I could drop the corrupted version of the table, which
would allow me to do pg_dumps of the whole database without memory errors.
>
> regards
> Tomas