[pg 7.4.rc2] pg_restore and large tables

From: ow <oneway_111(at)yahoo(dot)com>
To: pgsql-admin(at)postgresql(dot)org
Subject: [pg 7.4.rc2] pg_restore and large tables
Date: 2003-11-12 16:55:34
Message-ID: 20031112165534.98881.qmail@web21401.mail.yahoo.com
Lists: pgsql-admin


Hi,

I'm trying to restore a table that has about 80 million records. The database was
dumped and is being restored according to the following procedure:

1) dump the db, data only
time /usr/local/pgsql/bin/pg_dump -abf ./pgsql.7.4.rc1.pgdump.Z --format=c
--compress=6 -U postgres testdb

2) create db schema from a separate file, including table structures,
constraints, indexes
3) edit restore order to satisfy the constraints
4) restore the db
time /usr/local/pgsql/bin/pg_restore -d testdb -U postgres -a
./pgsql.7.4.rc1.pgdump.Z -L ./restoreOrder.txt --verbose
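
For what it's worth, since this is a data-only restore into a schema that already has
the constraints in place, I suspect every inserted row is firing the fk check. A variant
of step 4 that skips those per-row checks might look like this, assuming 7.4's pg_restore
supports the --disable-triggers option the way the docs describe (I haven't tried it on
this dump):

time /usr/local/pgsql/bin/pg_restore -d testdb -U postgres -a --disable-triggers
./pgsql.7.4.rc1.pgdump.Z -L ./restoreOrder.txt --verbose

The fk would then go unchecked against the restored data, so it only makes sense if the
dump is known to be consistent.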

pg_restore has been running for 14 hours now, and there's no end in sight. Meanwhile,
postmaster is slowly eating away at memory; it now holds 46% of all available memory,
with about 900MB in swap. HD activity is non-stop.

In retrospect, I guess the table with 80M records could've been created without indexes
(it has three: pk and ak constraints plus an fk index) to speed up the pg_restore ...
but then I'm not sure whether creating the indexes afterwards would've been much faster.
Is there anything I'm doing wrong?
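
If I had to redo it, the rough sequence would probably be something along these lines
(table, column, and index names below are made up for illustration; the sort_mem figure
is a guess at what the index builds would want on 7.4, in KB):

-- before the data-only restore
ALTER TABLE bigtable DROP CONSTRAINT bigtable_pkey;      -- pk
ALTER TABLE bigtable DROP CONSTRAINT bigtable_alt_key;   -- ak (unique)
DROP INDEX bigtable_parent_idx;                          -- index on the fk column

-- ... run the pg_restore from step 4 ...

-- rebuild afterwards with more sort memory for the session
SET sort_mem = 131072;                                   -- 128MB
ALTER TABLE bigtable ADD CONSTRAINT bigtable_pkey PRIMARY KEY (id);
ALTER TABLE bigtable ADD CONSTRAINT bigtable_alt_key UNIQUE (alt_id);
CREATE INDEX bigtable_parent_idx ON bigtable (parent_id);

No idea yet whether three sequential index builds over 80M rows would actually beat
maintaining them during the load, which is really the question.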

Thanks

