From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Thomas Lockhart <lockhart(at)alumni(dot)caltech(dot)edu>
Cc: "Kirby Bohling (TRSi)" <kbohling(at)oasis(dot)novia(dot)net>, pgsql-interfaces(at)postgresql(dot)org
Subject: Re: cursor interface to libpq
Date: 2000-09-22 16:31:57
Message-ID: 29420.969640317@sss.pgh.pa.us
Lists: pgsql-interfaces
Thomas Lockhart <lockhart(at)alumni(dot)caltech(dot)edu> writes:
> afaik this should all work. You can run pg_dump and pipe the output to a
> tape drive or to gzip. You *know* that a real backup will take something
> like the size of the database (maybe a factor of two or so less) since
> the data has to go somewhere.
pg_dump in default mode (ie, dump data as COPY commands) doesn't have a
problem with huge tables because the COPY data is just dumped out in a
streaming fashion.
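In practice a default-mode dump streams straight into gzip or a tape
device, as Thomas suggests; a minimal sketch (database name and device
path are placeholders):

    # default mode: data comes out as COPY commands, written as a stream
    pg_dump mydb | gzip > mydb.sql.gz

    # or send it directly to a tape device
    pg_dump mydb > /dev/st0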
If you insist on using the "dump data as insert commands" option then
huge tables cause a memory problem in pg_dump, but on the other hand you
are going to get pretty tired of waiting for such a script to reload,
too. I recommend just using the default behavior ...
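For reference, the insert-commands option being discouraged here is
pg_dump's -d switch (-D to include column names) in releases of this
vintage; later releases spell it --inserts. Reloading the default-mode
dump is likewise just a pipe (same placeholder names as above):

    # reload: psql streams the COPY data back in
    gunzip -c mydb.sql.gz | psql mydb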
regards, tom lane