From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
---|---|
To: | Marko Kreen <marko(at)l-t(dot)ee> |
Cc: | pgsql-hackers(at)postgresql(dot)org |
Subject: | Re: Restoring large tables with COPY |
Date: | 2001-12-11 15:55:30 |
Message-ID: | 17277.1008086130@sss.pgh.pa.us |
Lists: | pgsql-hackers |
Marko Kreen <marko(at)l-t(dot)ee> writes:
> Maybe I am missing something obvious, but I am unable to load
> larger tables (~300k rows) with the COPY command that pg_dump
> produces by default.

I'd like to find out what the problem is, rather than work around it
with such an ugly hack.
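
A plain-text pg_dump script loads each table through a single COPY ... FROM stdin block, so all of a table's rows are fed through one command when the script is replayed with psql. A minimal sketch, with a made-up table and rows:

```sql
-- Sketch of the COPY block a plain-text pg_dump script emits; the table
-- name and the tab-separated rows below are made up for illustration.
COPY "orders" FROM stdin;
1	2001-12-01	pending
2	2001-12-02	shipped
\.
```
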
> 1) Too few WAL files.
> - well, increase the wal_files (e.g. to 32),

What PG version are you running? 7.1.3 or later should not have a
problem with WAL file growth.
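
For reference, the workaround quoted above is a postgresql.conf change; a minimal sketch assuming a 7.1-era configuration file, with the value taken from Marko's suggestion:

```
# postgresql.conf -- sketch of the quoted workaround (7.1-era setting);
# wal_files asks the server to create extra WAL segment files in advance
# at checkpoint time.
wal_files = 32
```
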
> 2) Machine runs out of swap, PostgreSQL seems to keep the whole TX
> in memory.

That should not happen either. Could we see the full schema of the
table you are having trouble with?
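
One way to capture that schema, assuming shell access to the database host (the table and database names are placeholders):

```sh
# Dump only the definition of the problem table, no data:
#   -s  schema only
#   -t  restrict the dump to the named table
pg_dump -s -t mytable mydb > mytable-schema.sql
```
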
> In short: during pg_restore the resource requirements are an
> order of magnitude higher than during pg_dump,

We found some client-side memory leaks in pg_restore recently; is that
what you're talking about?

regards, tom lane