From: Shanker Singh <ssingh(at)iii(dot)com>
To: Sterfield <sterfield(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "rod(at)iol(dot)ie" <rod(at)iol(dot)ie>, "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>, Shanker Singh <ssingh(at)iii(dot)com>
Subject: Re: parallel dump fails to dump large tables
Date: 2015-02-23 16:22:48
Message-ID: 961471F4049EF94EAD4D0165318BD88162590740@Corp-MBXE3.iii.com
Lists: pgsql-general
I did set up the keepalive option for SSH, but pg_dump still fails on the 48 GB table. It was able to dump a table of 34 GB (dump file size 2 GB) but fails on the 48 GB table (partial dump file size 3.9 GB). Is there any limit on the size of the dump file in a parallel dump?
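For reference, the dump is run with a command along the lines below. The host, output directory, and job count here are only placeholders; the user and database names are the ones that appear in the quoted error log further down:

    # directory format (-F d) is required for parallel dumps; -j sets the number of worker jobs
    pg_dump -h dbserver -U pdroot -d iii -F d -j 4 -f /backup/iii_dump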
Thanks
Shanker
From: Sterfield [mailto:sterfield(at)gmail(dot)com]
Sent: Sunday, February 22, 2015 8:50 AM
To: Shanker Singh
Cc: Tom Lane; rod(at)iol(dot)ie; pgsql-general(at)postgresql(dot)org
Subject: Re: [GENERAL] parallel dump fails to dump large tables
2015-02-20 14:26 GMT-08:00 Shanker Singh <ssingh(at)iii(dot)com>:
I tried turning off SSL renegotiation by setting "ssl_renegotiation_limit = 0" in postgresql.conf, but it had no effect. The parallel dump still consistently fails on large tables.
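The exact line used in postgresql.conf (a value of 0 disables renegotiation; the parameter still exists in 9.4 but was removed in 9.5):

    # disable SSL renegotiation entirely
    ssl_renegotiation_limit = 0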
Thanks
Shanker
Hi,
Maybe you could try to set up an SSH connection between the two servers with a keepalive option, and leave it open for a long time (at least the duration of your backup), just to test whether your SSH connection is still being cut after some time.
That way, you will know whether the problem is related to SSH or to PostgreSQL.
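For example, something along these lines keeps the session alive by probing the server every 60 seconds (the host name is just a placeholder):

    # client-side keepalive: send a probe every 60s, give up after 5 unanswered probes
    ssh -o ServerAliveInterval=60 -o ServerAliveCountMax=5 user@db-server.example.com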
Thanks,
Guillaume
-----Original Message-----
From: Tom Lane [mailto:tgl(at)sss(dot)pgh(dot)pa(dot)us]
Sent: Saturday, February 14, 2015 9:00 AM
To: rod(at)iol(dot)ie
Cc: Shanker Singh; pgsql-general(at)postgresql(dot)org
Subject: Re: [GENERAL] parallel dump fails to dump large tables
"Raymond O'Donnell" <rod(at)iol(dot)ie<mailto:rod(at)iol(dot)ie>> writes:
> On 14/02/2015 15:42, Shanker Singh wrote:
>> Hi,
>> I am having a problem using the parallel pg_dump feature in postgres
>> release 9.4. The table is large (54 GB). The dump fails with the
>> error: "pg_dump: [parallel archiver] a worker process died
>> unexpectedly". After this error the pg_dump aborts. The error log
>> file gets the following message:
>>
>> 2015-02-09 15:22:04 PST [8636]: [2-1]
>> user=pdroot,db=iii,appname=pg_dump
>> STATEMENT: COPY iiirecord.varfield (id, field_type_tag, marc_tag,
>> marc_ind1, marc_ind2, field_content, field_group_id, occ_num,
>> record_id) TO stdout;
>> 2015-02-09 15:22:04 PST [8636]: [3-1]
>> user=pdroot,db=iii,appname=pg_dump
>> FATAL: connection to client lost
> There's your problem - something went wrong with the network.
I'm wondering about SSL renegotiation failures as a possible cause of the disconnect --- that would explain why it only happens on large tables.
regards, tom lane
--
Sent via pgsql-general mailing list (pgsql-general(at)postgresql(dot)org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general