| From: | Ron Snyder <snyder(at)roguewave(dot)com> |
|---|---|
| To: | pgsql-general(at)postgresql(dot)org |
| Subject: | Re: postgreSQL 7.3.8, pg_dump not able to find large objects |
| Date: | 2005-06-09 17:06:10 |
| Message-ID: | D486606E7AD20947BDB7E56862E04C39015CF223@cvo1.cvo.roguewave.com |
| Lists: | pgsql-general |
> We've been getting errors similar to the following (the specific large
> object that is "missing" is different every time) during our nightly
> pg_dump:
>
> pg_dump: dumpBlobs(): could not open large object:
> ERROR: inv_open: large object 48217896 not found
>
After a fair amount of testing and experimenting, we're pretty sure the
problem is that large objects are being deleted while pg_dump is running.
The entire dump takes about 2 hours, with roughly 1.5 hours of that spent
on blobs. My question is this: how are other PostgreSQL users with constant
large-object insertions and deletions handling their backups? (And is this
something I missed in the documentation somewhere?)
Is this a problem that is handled differently in PostgreSQL 8?
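For illustration, a lock-file scheme along these lines would keep the
large-object cleanup job and the nightly dump from ever overlapping; this
is just a sketch, and the database name "mydb", the paths, and the lock
location are placeholders, not our actual setup:

```
#!/bin/sh
# Sketch: serialize the nightly dump against the large-object cleanup
# job via a lock directory (mkdir is atomic, so only one process can
# hold the lock at a time). "mydb" and all paths are placeholders.
LOCK=/var/run/mydb-backup.lock
until mkdir "$LOCK" 2>/dev/null; do
    sleep 60            # wait for the cleanup job to release the lock
done
trap 'rmdir "$LOCK"' EXIT

# -Fc (custom archive format) is required for blob support; -b tells
# pg_dump to include large objects in the dump.
pg_dump -Fc -b -f "/backups/mydb-$(date +%Y%m%d).dump" mydb
```

The cleanup job would take the same lock before doing its lo_unlink()
calls, so neither side ever sees the other mid-run.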
Thanks,
-ron