From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Martijn van Oosterhout <kleptog(at)cupid(dot)suninternet(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: disk backups
Date: 2000-06-30 16:21:10
Message-ID: 19224.962382070@sss.pgh.pa.us
Lists: pgsql-general
Martijn van Oosterhout <kleptog(at)cupid(dot)suninternet(dot)com> writes:
> Tom Lane wrote:
>> pg_dump shouldn't be a performance hog if you are using the default
>> COPY-based style of data export. I'd only expect memory problems
>> if you are using INSERT-based export (-d or -D switch to pg_dump).
> Aha! Thanks for that! Last time I asked here nobody answered...
> So it only happens with an INSERT based export, didn't know
> that (though I can't see why there would be a difference...)
COPY uses a streaming style of output. To generate INSERT commands,
pg_dump first does a "SELECT * FROM table", and that runs into libpq's
suck-the-whole-result-set-into-memory behavior. See nearby thread
titled "Large Tables(>1 Gb)".
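
[For anyone hitting the same memory wall in their own client code: a cursor
keeps the result set on the server and lets the client fetch it in bounded
batches, instead of one big SELECT that libpq buffers entirely. This is a
workaround sketch, not what pg_dump itself does; "mytable" and the batch
size of 1000 are placeholders.

```sql
BEGIN;
-- Cursors only live inside a transaction block.
DECLARE big_cur CURSOR FOR SELECT * FROM mytable;
-- Pull 1000 rows at a time; repeat until FETCH returns no rows.
FETCH 1000 FROM big_cur;
CLOSE big_cur;
COMMIT;
```

Client memory then scales with the batch size rather than with the table.]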
> Yes, we are using -D, mainly because we've had "issues" with
> the COPY based export, ie, it won't read the resulting file
> back. Admittedly this was a while ago now and I haven't checked
> since.
IIRC that's a long-since-fixed bug. If not, file a bug report so
we can fix whatever's still wrong...
> I was thinking to write my own version of pg_dump that would
> do that but also allow specifying of ordering constraint, ie,
> clustering. Maybe it would be better to just switch to the
> other output format...
Philip Warner needs alpha testers for his new version of pg_dump ;-).
Unfortunately I think he's only been talking about it on pghackers
so far.
regards, tom lane