From: | "Joshua D(dot) Drake" <jd(at)commandprompt(dot)com> |
---|---|
To: | Simon Riggs <simon(at)2ndquadrant(dot)com> |
Cc: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-patches(at)postgresql(dot)org |
Subject: | Re: pg_dump additional options for performance |
Date: | 2008-07-27 16:57:17 |
Message-ID: | 488CA8ED.6080908@commandprompt.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Thread: | |
Lists: | pgsql-hackers pgsql-patches |
Simon Riggs wrote:
> On Sat, 2008-07-26 at 11:03 -0700, Joshua D. Drake wrote:
>
>> 2. We have no concurrency which means, anyone with any database over 50G
>> has unacceptable restore times.
>
> Agreed.
> Sounds good.
>
> Doesn't help with the main element of dump time: one table at a time to
> one output file. We need a way to dump multiple tables concurrently,
> ending in multiple files/filesystems.
Agreed but that is a problem I understand with a solution I don't. I am
all eyes on a way to fix that. One thought I had and please, be gentle
in response was some sort of async transaction capability. I know that
libpq has the ability to send async queries. Is it possible to do this:
send async(copy table to foo)
send async(copy table to bar)
send async(copy table to baz)
Where all three copies are happening in the background?
Sincerely,
Joshua D. Drake
From | Date | Subject | |
---|---|---|---|
Next Message | Stephen R. van den Berg | 2008-07-27 19:00:04 | Re: Protocol 3, Execute, maxrows to return, impact? |
Previous Message | Simon Riggs | 2008-07-27 09:37:34 | Re: pg_dump additional options for performance |
From | Date | Subject | |
---|---|---|---|
Next Message | Andrew Dunstan | 2008-07-27 23:42:24 | Re: pg_dump additional options for performance |
Previous Message | Simon Riggs | 2008-07-27 09:37:34 | Re: pg_dump additional options for performance |