From: Tomasz Ostrowski <tometzky(at)batory(dot)org(dot)pl>
To: pgsql-general(at)postgresql(dot)org
Cc: "A. Ozen Akyurek" <ozen(at)ventura(dot)com(dot)tr>
Subject: Re: copy a large table raises out of memory exception
Date: 2007-12-13 14:50:13
Message-ID: 20071213145007.GA22414@batory.org.pl
Lists: pgsql-general
On Mon, 10 Dec 2007, A. Ozen Akyurek wrote:
> We have a large table (about 9,000,000 rows and total size is about 2.8 GB)
> which is exported to a binary file.
How was it exported? With "COPY tablename TO 'filename' WITH BINARY"?
"The BINARY key word causes all data to be stored/read as binary
format rather than as text. It is somewhat faster than the normal
text mode, but a binary-format file is less portable across machine
architectures and PostgreSQL versions."
http://www.postgresql.org/docs/8.2/static/sql-copy.html
Maybe you are bitten by this "less portable".
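If the binary file was produced on a machine with a different architecture or PostgreSQL version, re-exporting in the default text format avoids the portability problem. A minimal sketch (table name and file path are placeholders, and the server must be able to read/write the given path):

```sql
-- On the source server: export in the default text format,
-- which is portable across architectures and PostgreSQL versions.
COPY tablename TO '/tmp/tablename.txt';

-- On the target server: load the text-format file.
COPY tablename FROM '/tmp/tablename.txt';
```

Text format is somewhat slower than binary, but for a one-off 2.8 GB transfer the portability is usually worth it.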
> When we run "copy tablename from filepath" command, (...) and
> postgre raises exception "out of memory".
I'd try to use pg_dump/pg_restore in custom format, like this:
pg_dump -a -Fc -Z1 -f [filename] -t [tablename] [olddatabasename]
pg_restore -1 -a -d [newdatabasename] [filename]
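For reference, a sketch of the same two commands with the flags annotated (the file, table, and database names here are placeholders, to be replaced with your own):

```shell
# -a  : dump data only, no schema
# -Fc : custom archive format (required for pg_restore)
# -Z1 : light, fast compression
# -t  : restrict the dump to one table
pg_dump -a -Fc -Z1 -f bigtable.dump -t bigtable olddb

# -1  : restore in a single transaction, so a failure rolls back cleanly
# -a  : restore data only
pg_restore -1 -a -d newdb bigtable.dump
```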
Regards
Tometzky
--
...although Eating Honey was a very good thing to do, there was a
moment just before you began to eat it which was better than when you
were...
Winnie the Pooh