| From: | Shanker Singh <ssingh(at)iii(dot)com> |
|---|---|
| To: | "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org> |
| Cc: | Shanker Singh <ssingh(at)iii(dot)com> |
| Subject: | parallel dump fails to dump large tables |
| Date: | 2015-02-14 15:42:24 |
| Message-ID: | 961471F4049EF94EAD4D0165318BD88162590243@Corp-MBXE3.iii.com |
| Lists: | pgsql-general |
Hi,
I am having a problem using the parallel pg_dump feature in PostgreSQL release 9.4. The table is large (54GB). The dump fails with the error "pg_dump: [parallel archiver] a worker process died unexpectedly", after which pg_dump aborts. The server error log contains the following messages:
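For reference, the dump was invoked roughly like this (the job count and output path below are placeholders, not the exact values from my run):

    # parallel workers (-j) require the directory archive format (-Fd)
    pg_dump -Fd -j 4 -f /backup/iii_dump iii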
2015-02-09 15:22:04 PST [8636]: [2-1] user=pdroot,db=iii,appname=pg_dump STATEMENT: COPY iiirecord.varfield (id, field_type_tag, marc_tag, marc_ind1, marc_ind2, field_content, field_group_id, occ_num, record_id) TO stdout;
2015-02-09 15:22:04 PST [8636]: [3-1] user=pdroot,db=iii,appname=pg_dump FATAL: connection to client lost
2015-02-09 15:22:04 PST [8636]: [4-1] user=pdroot,db=iii,appname=pg_dump STATEMENT: COPY iiirecord.varfield (id, field_type_tag, marc_tag, marc_ind1, marc_ind2, field_content, field_group_id, occ_num, record_id) TO stdout;
Is there any config parameter that I need to set to use parallel dump for large tables?
thanks
shasingh