| From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
|---|---|
| To: | Bob Lunney <bob_lunney(at)yahoo(dot)com> |
| Cc: | "pgsql-admin(at)postgresql(dot)org" <pgsql-admin(at)postgresql(dot)org> |
| Subject: | Re: Parallel pg_dump on a single database |
| Date: | 2011-07-01 19:09:50 |
| Message-ID: | 2813.1309547390@sss.pgh.pa.us |
| Lists: | pgsql-admin |
Bob Lunney <bob_lunney(at)yahoo(dot)com> writes:
> Is it possible (or smart!) to run multiple pg_dumps simultaneously on a single database, dumping different parts of the database to different files by using table and schema exclusion? I'm attempting this, and sometimes it works and sometimes, when I check the dump files with
> pg_restore -Fc <dumpfile> > /dev/null
> I get
> pg_restore: [custom archiver] found unexpected block ID (4) when reading data -- expected 4238
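A minimal sketch of the pattern the poster describes, for anyone trying to reproduce it. The database name `mydb` and schema name `big` are placeholders, not details from the thread; the selection flags (`-n`/`-N`) and the verification step are the standard pg_dump/pg_restore options.

```shell
#!/bin/sh
# Hypothetical reproduction sketch: split one database into two
# custom-format dumps running concurrently, then verify each archive.
# "mydb" and schema "big" are made-up names, not from the thread.

# Skip gracefully where the PostgreSQL client tools are not installed.
command -v pg_dump >/dev/null 2>&1 || exit 0

dump_part() {
  # $1 = output file; remaining args = pg_dump selection flags
  out=$1; shift
  pg_dump -Fc -f "$out" "$@" mydb
}

# Run both halves in parallel: one dump takes schema "big",
# the other takes everything else.
dump_part part1.dump -n big &
dump_part part2.dump -N big &
wait

# Verify each archive the same way the original poster did.
for f in part1.dump part2.dump; do
  pg_restore "$f" > /dev/null || echo "archive $f failed verification"
done
```

Streaming each archive through `pg_restore` to `/dev/null` is a cheap integrity check: it forces the custom-format reader to walk every data block, which is exactly where the "unexpected block ID" error above surfaces.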
That sure sounds like a bug. What PG version are you using exactly?
Can you provide a more specific description of what you're doing,
so somebody else could reproduce this?
regards, tom lane