7.1 dumps with large objects

From: "David Wall" <d(dot)wall(at)computer(dot)org>
To: <pgsql-general(at)postgresql(dot)org>
Subject: 7.1 dumps with large objects
Date: 2001-04-14 17:58:56
Message-ID: 000901c0c50c$93662c00$5a2b7ad8@expertrade.com
Lists: pgsql-general

Wonderful job on getting 7.1 released. I've just installed it in place of a
7.1beta4 database, with the great advantage of not even having to migrate
the database.

It seems that 7.1 is able to handle large objects in its dump/restore
natively now and no longer requires the use of the contrib program to dump
them. Large objects are represented by OIDs in the table schema, and I'm
trying to make sure that I understand the process correctly from what I've
read in the admin guide and command reference guide.

In my case, the OIDs do not mean anything to my programs, and they are not
used as keys. So I presume that I don't really care about preserving OIDs.
Does this just mean that if I restore a blob, it will get a new OID, but
otherwise everything will be okay?
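If it helps to verify that, 7.1 keeps all large-object data in the single pg_largeobject catalog table, keyed by loid, so the set of blob OIDs before and after a restore can be compared with a quick query (a sketch; "dbname" is a placeholder):

```shell
# List the large-object OIDs present in a database (7.1 stores all
# blob data in pg_largeobject, keyed by loid). "dbname" is a placeholder.
psql dbname -c "SELECT DISTINCT loid FROM pg_largeobject;"
```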

This is my plan of attack:

To back up my databases (I have several databases running in a single
PostgreSQL server, and I'd like to be able to back them up separately since
they could move from one machine to another as the loads increase), I'll be
using:

pg_dump -b -Fc dbname > dbname.dump
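Since the point is to dump each database separately, that command could be wrapped in a simple loop; the database names below are placeholders:

```shell
# Sketch: dump each database to its own custom-format archive so the
# databases can later be restored on different machines independently.
# The database names are placeholders.
for db in sales inventory accounts; do
    pg_dump -b -Fc "$db" > "$db.dump"
done
```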

Then, to restore, I'd use:

pg_restore -d dbname dbname.dump

Is that going to work for me?
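One detail worth noting: pg_restore -d restores into an existing database, so on a fresh machine the sequence would look roughly like this (a sketch, assuming default connection settings; pg_restore's -C option can alternatively recreate the database from the dump):

```shell
# Create an empty target database first, then restore the archive into it.
createdb dbname
pg_restore -d dbname dbname.dump
```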

I also noted that pg_dump has a -Z level specifier for compression. When
not specified, the backup showed a compression level of "-1" (using
pg_restore -l). Is that the highest compression level, or does that mean it
was disabled? I did note that the -Fc option created a file that was larger
than a plain file, and not anywhere near as small as if I gzip'ed the
output. In my case, it's a very small test database, so I don't know if
that's the reason, or whether -Fc by itself doesn't really compress unless
the -Z option is used.

And for -Z, is 0 or 9 the highest level compression? Is there a particular
value that's generally considered the best tradeoff in terms of speed versus
space?
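For reference, -Z follows the usual zlib convention: 0 disables compression and 9 is the maximum, and the "-1" reported by pg_restore -l is zlib's marker for "default level" (which zlib maps to level 6), not "disabled". An explicit invocation would look like:

```shell
# -Z ranges from 0 (no compression) to 9 (maximum compression);
# level 6, zlib's default, is generally the best speed/space tradeoff.
pg_dump -b -Fc -Z 6 dbname > dbname.dump
```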

Thanks,
David
