From: Philip Warner <pjw(at)rhyme(dot)com(dot)au>
To: Kovacs Zoltan Sandor <tip(at)pc10(dot)radnoti-szeged(dot)sulinet(dot)hu>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: pg_dump and pg_restore, speed
Date: 2000-08-01 16:10:49
Message-ID: 3.0.5.32.20000802021049.0297ea10@mail.rhyme.com.au
Lists: pgsql-hackers
At 18:26 1/08/00 +0200, Kovacs Zoltan Sandor wrote:
>> Can you send me the actual times? And the command you used?
>I don't think my database is large enough to give you worthy data (~10000
>lines). But, with the old pg_dump I got 5 secs, with the new one: 9 secs.
>I used the -Fc option. The old dump was 377K, the new one is 285K.
The time and size differences are because the -Fc format compresses the data
by default. If you used '-Fc -Z0', you would get no compression, a larger
file, and a time closer to the original.
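As a quick sketch of the two variants (the database name 'mydb' and output
file names are placeholders, not anything from your setup):

```shell
# Custom-format dump; compression is on by default, so it is
# smaller on disk but costs extra CPU time.
pg_dump -Fc mydb > mydb.dump

# Same format with compression disabled (-Z0); larger file,
# but the dump time should be much closer to the old pg_dump.
pg_dump -Fc -Z0 mydb > mydb-nocomp.dump
```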
>pg_restore was fast, it was less than 1 sec. The total restoring time with
>psql was 29 secs from the old dump, 86 secs from the new one.
86 with the new one is a worry. It should not be that much slower - unless
there are a lot of tables, in which case updating of pg_class to disable
triggers for each table will slow it down a bit.
I assume from this that you used pg_restore to create a script, which you
then fed into psql. Is that right?
If so, you might like to try:
% createdb newdb
% pg_restore <backup-file> --db=newdb
this avoids using psql and should be faster. I'd be interested to know how
slow/fast it is. If it is still slow, is there any chance you could mail me
the backup file directly? I'll certainly understand if you say 'no', for
obvious reasons.
>I use Linux
>2.2-12.20 (RedHat 6.1), the server is a 366 MHz Intel Celeron with a
>WD64AA hard disk w/ 2048K cache.
Should be plenty.
>By the way, is there any statistics about the dumping/restoring time with
>larger databases? Is there a faster method to backup and restore a
>database than I did?
As above; dump: use -Z0 (no compression), restore: use --db to attach to
the DB directly.
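Putting the two suggestions together, the whole cycle would look something
like this (again, 'olddb'/'newdb' and the file name are placeholders):

```shell
# Dump without compression for speed.
pg_dump -Fc -Z0 olddb > olddb.backup

# Restore by connecting directly, bypassing psql.
createdb newdb
pg_restore olddb.backup --db=newdb
```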
Statistics I have done with databases of 100,000 records and 1.3M records
show no appreciable difference between the old & new for both dump &
restore. The dump always takes longer if compression is used. I have not
compared it to piping the output to gzip, but I imagine they are similar.
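For reference, the gzip-pipe approach I mean is the usual plain-text dump
piped through an external compressor (names are placeholders):

```shell
# Plain-text dump compressed externally, rather than by -Fc itself.
pg_dump olddb | gzip > olddb.sql.gz

# Restoring it means decompressing back into psql.
gunzip -c olddb.sql.gz | psql newdb
```

I would expect the compression cost of this and '-Fc' to be similar, since
both end up running the data through a zlib-style compressor.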
Hope this helps,
Philip.
----------------------------------------------------------------
Philip Warner | __---_____
Albatross Consulting Pty. Ltd. |----/ - \
(A.C.N. 008 659 498) | /(@) ______---_
Tel: (+61) 0500 83 82 81 | _________ \
Fax: (+61) 0500 83 82 82 | ___________ |
Http://www.rhyme.com.au | / \|
| --________--
PGP key available upon request, | /
and from pgp5.ai.mit.edu:11371 |/