From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: rod(at)iol(dot)ie
Cc: Shanker Singh <ssingh(at)iii(dot)com>, "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: parallel dump fails to dump large tables
Date: 2015-02-14 16:59:35
Message-ID: 28949.1423933175@sss.pgh.pa.us
Lists: pgsql-general
"Raymond O'Donnell" <rod(at)iol(dot)ie> writes:
> On 14/02/2015 15:42, Shanker Singh wrote:
>> Hi,
>> I am having a problem using the parallel pg_dump feature in Postgres
>> release 9.4. The table is large (54 GB). The dump fails with the
>> error "pg_dump: [parallel archiver] a worker process died
>> unexpectedly", after which pg_dump aborts. The server's log file
>> shows the following messages:
>>
>> 2015-02-09 15:22:04 PST [8636]: [2-1] user=pdroot,db=iii,appname=pg_dump
>> STATEMENT: COPY iiirecord.varfield (id, field_type_tag, marc_tag,
>> marc_ind1, marc_ind2, field_content, field_group_id, occ_num, record_id)
>> TO stdout;
>> 2015-02-09 15:22:04 PST [8636]: [3-1] user=pdroot,db=iii,appname=pg_dump
>> FATAL: connection to client lost
> There's your problem - something went wrong with the network.
I'm wondering about SSL renegotiation failures as a possible cause of
the disconnect --- that would explain why it only happens on large tables.
regards, tom lane
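
[Editor's note: a quick way to test the network theory is to rerun the dump on the database host itself over a local socket, taking the client-server network path out of the picture. A sketch only; the job count and output path are placeholders, not from the report:

    # run on the database server itself; parallel dump requires
    # the directory output format (-Fd) and a -j job count
    pg_dump -Fd -j 4 -U pdroot -f /backups/iii.dir iii

If a local run of the same 54 GB table succeeds, the failure lies somewhere between client and server rather than in pg_dump itself.]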
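
[Editor's note: if SSL renegotiation is the culprit, one way to check is to disable it and retry. A minimal sketch, assuming a stock 9.4 setup; the dump command's path and job count are again placeholders:

    # postgresql.conf: 0 disables SSL renegotiation; the 9.4 default
    # of 512MB means renegotiation fires only on connections that move
    # a lot of data, which would match failures on large tables only
    ssl_renegotiation_limit = 0

    # or take SSL out of the picture from the client side
    # (assuming pg_hba.conf permits non-SSL connections):
    PGSSLMODE=disable pg_dump -Fd -j 4 -f /backups/iii.dir iii

If the dump completes with renegotiation disabled, that points squarely at the renegotiation path.]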