Storing large files in multiple schemas: BLOB or BYTEA

From: <tigran2-postgres(at)riatest(dot)com>
To: <pgsql-general(at)postgresql(dot)org>
Subject: Storing large files in multiple schemas: BLOB or BYTEA
Date: 2012-10-11 06:14:42
Message-ID: 010c01cda777$b6a2c5c0$23e85140$@riatest.com
Lists: pgsql-general

>Yeah, a pg_dump mode that dumped everything but large objects would be
>nice.

pg_dump has a -b option that controls whether large objects are dumped.
The problem is that with -b it dumps all large objects regardless of
which schema you asked it to dump with -n. Otherwise it works fine.
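
For illustration, the two invocations might look like this (a sketch via
Python's subprocess; "mydb" and "myschema" are made-up names):

    import subprocess

    # -n limits the dump to one schema; large objects are excluded by default.
    subprocess.check_call(["pg_dump", "-n", "myschema",
                           "-f", "schema_only.dump", "mydb"])

    # Adding -b pulls in every large object in the database, not just the
    # ones belonging to myschema -- the problem described above.
    subprocess.check_call(["pg_dump", "-n", "myschema", "-b",
                           "-f", "schema_plus_all_lobs.dump", "mydb"])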

>I'm now wondering about the idea of implementing a pg_dump option that
>dumped large objects into a directory tree like
> lobs/[loid]/[lob_md5]
>and wrote out a restore script that loaded them using `lo_import`.
>
>During dumping temporary copies could be written to something like
>lobs/[loid]/.tmp.nnnn with the md5 being calculated on the fly as the
>byte stream is read. If the dumped file had the same md5 as the existing
>one it'd just delete the tempfile; otherwise the tempfile would be
>renamed to the calculated md5.
>
>That way incremental backup systems could manage the dumped LOB tree
>without quite the same horrible degree of duplication as is currently
>faced when using lo in the database with pg_dump.
>
>A last_modified timestamp on `pg_largeobject_metadata` would be even
>better, allowing the cost of reading and discarding rarely-changed large
>objects to be avoided.

The incremental backup angle is definitely an interesting idea.
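
For what it's worth, a rough sketch of that hash-as-you-stream,
tempfile-then-rename scheme (dump_lob and its chunk source are
hypothetical, just to make the flow concrete):

    import hashlib
    import os

    def dump_lob(loid, chunks, root="lobs"):
        # Write one large object to lobs/<loid>/<md5>, keeping the
        # existing file when the content hasn't changed.
        lob_dir = os.path.join(root, str(loid))
        if not os.path.isdir(lob_dir):
            os.makedirs(lob_dir)
        tmp_path = os.path.join(lob_dir, ".tmp.%d" % os.getpid())
        md5 = hashlib.md5()
        with open(tmp_path, "wb") as tmp:
            for chunk in chunks:        # e.g. successive lo_read() buffers
                md5.update(chunk)       # checksum computed on the fly
                tmp.write(chunk)
        final_path = os.path.join(lob_dir, md5.hexdigest())
        if os.path.exists(final_path):
            os.remove(tmp_path)         # unchanged since the last dump
        else:
            os.rename(tmp_path, final_path)  # new or modified content
        return final_path

An incremental backup tool then only sees a new file when a large
object's content actually changes, and the generated restore script
would lo_import each lobs/<loid>/<md5> back under its original OID.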
