| From: | "C(dot)S(dot)Park" <pcs(at)mhlx01(dot)kek(dot)jp> |
|---|---|
| To: | pgsql-hackers(at)postgresql(dot)org |
| Cc: | "C(dot)S(dot)Park" <pcs(at)mhlx01(dot)kek(dot)jp> |
| Subject: | [Q] pg_dump with large object & backend cache... |
| Date: | 1999-08-18 08:43:27 |
| Message-ID: | 19990818174327.A30724@mhlx01.kek.jp |
| Lists: | pgsql-hackers |
Hello,
We are using v6.3.2 with patches, and many of our tables use
'large objects'. The main problems with this are:
(1) Large objects cannot be dumped -- is this still true in v6.5.1,
or is there no plan to implement dumping blobs in the near future?
(2) If I want to clone databases from a Linux to a Solaris machine, the
large-object/pg_dump problem above means a lot of manual work to dump
the databases and restore them on a machine of another architecture.
Is there any utility to ease duplication (backup) of databases?
(3) To upgrade v6.3.2 databases to v6.5.1, including large objects,
is there a way to dump and restore?
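For reference, the manual procedure I mean in (2) looks roughly like the
sketch below: exporting each blob with the server-side lo_export()
function and re-importing it on the target with lo_import(). Database
names, OIDs, and file paths are placeholders, and the loop only prints
the psql commands rather than running them:

```shell
# Sketch of the manual blob-copy step.  OIDs below are placeholders;
# a real run would take the OID list from the columns that reference
# large objects.  This loop only prints the psql commands to run.
for oid in 17001 17002; do
  echo "psql srcdb -c \"SELECT lo_export($oid, '/tmp/lo_$oid.dat')\""
  echo "psql dstdb -c \"SELECT lo_import('/tmp/lo_$oid.dat')\""
done
# Note: lo_import() assigns a NEW OID on the target side, so rows
# that referenced the old OID must then be updated by hand.
```

This is exactly the kind of per-object bookkeeping I would hope a
dump/restore utility could take over.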
Another problem with v6.3.2 is frequent messages (errors?) related to
backend cache invalidation failure -- probably reported many times
already -- like this:
NOTICE: SIAssignBackendId: discarding tag 2147430138
Connection database 'request' failed.
FATAL 1: Backend cache invalidation initialization failed
(1) Will simply increasing the max connection count from 32 to 64 in
src/include/storage/sinvaladt.h fix the above problem?
(2) If I want to keep v6.3.2, which PATCH will FIX the above problem?
(3) Is it already fixed in v6.5.1?
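To be concrete about (1), the change I have in mind is just a one-line
edit along these lines (a sketch only -- the exact macro name in the
6.3.2 header may differ; MaxBackendId is my assumption):

```c
/* src/include/storage/sinvaladt.h -- hypothetical sketch of (1).
 * MaxBackendId is assumed to be the limit in question; the actual
 * identifier in 6.3.2 may be named differently.
 */
#define MaxBackendId 64   /* raised from 32 to allow more backends */
```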
Best Regards,
C.S.Park