From: Hannu Krosing <hannu(at)tm(dot)ee>
To: Lamar Owen <lamar(dot)owen(at)wgcr(dot)org>
Cc: Jan Wieck <JanWieck(at)Yahoo(dot)com>, Christopher Kings-Lynne <chriskl(at)familyhealth(dot)com(dot)au>, HACKERS <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: (A) native Windows port
Date: 2002-07-03 12:06:13
Message-ID: 1025697973.23474.37.camel@taru.tm.ee
Lists: pgsql-general pgsql-hackers
On Tue, 2002-07-02 at 21:50, Lamar Owen wrote:
> On Tuesday 02 July 2002 03:14 pm, Jan Wieck wrote:
> > Lamar Owen wrote:
> > > [...]
> > > Martin O has come up with a 'pg_fsck' utility that, IMHO, holds a great
> > > deal of promise for seamless binary 'in place' upgrading. He has been
> > > able to write code to read multiple versions' database structures --
> > > proving that it CAN be done.
>
> > Unfortunately it's not the on-disk binary format of files that causes
> > the big problems. Our dump/initdb/restore sequence is also the solution
> > for system catalog changes.
>
> Hmmm. They get in there via the bki interface, right? Is there an OID issue
> with these? Could differential BKI files be possible, with known system
> catalog changes that can be applied via a 'patchdb' utility? I know pretty
> much how pg_upgrade is doing things now -- and, frankly, it's a little bit of
> a kludge.
>
> Yes, I do understand the things a dump restore does on somewhat of a detailed
> level. I know the restore repopulates the entries in the system catalogs for
> the restored data, etc, etc.
>
> Currently dump/restore handles the catalog changes. But by what other means
> could we upgrade the system catalog in place?
>
> Our very extensibility is our weakness for upgrades. Can it be worked around?
> Anyone have any ideas?
Perhaps we can keep an old postgres binary + old backend around and then
use it in single-user mode to do a pg_dump into our running backend.
IIRC Access does its database upgrade by copying the old database to the new one.
Our approach could be something like
$OLD/postgres -D $OLD_DATA < pg_dump_cmds | $NEW/postgres -D $NEW_DATA
or perhaps, while the old backend is still running:
pg_dumpall | path_to_new_backend/bin/postgres
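A rough sketch of that second variant, just to make the idea concrete (the install prefixes, data directory, port and file names below are placeholders, not a worked-out procedure):

$OLD/bin/pg_dumpall > /somewhere/dump.sql       # old postmaster still running
$NEW/bin/initdb -D $NEW_DATA                    # create the new cluster
$NEW/bin/postmaster -D $NEW_DATA -p 5433 &      # start it on a spare port
$NEW/bin/psql -p 5433 -f /somewhere/dump.sql template1

Only the dump file and the new cluster need extra disk space, which ties in with the free-space caveat below.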
I don't think we should assume that we will be able to do an upgrade with less free space than is currently used by the databases (or at least by the data; indexes can be added later).
Trying to do an in-place upgrade is an interesting CS project, but any
serious DBA will have backups, so they can do
$ psql < dumpfile
Speeding up COPY FROM could be a good thing (perhaps enabling it to run without any checks and outside transactions when loading dumps).
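As a point of reference, one speedup that is already possible today is to load the whole dump inside a single transaction, so the individual COPY and INSERT statements don't each commit separately (just a sketch; mydb and dump.sql are placeholders, and it assumes a single-database dump without \connect commands):

( echo "BEGIN;"; cat dump.sql; echo "COMMIT;" ) | psql mydb

The checkless, non-transactional COPY suggested above would go further than that.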
And home users will have databases small enough that they should have enough free space to keep both the old and the new version around for some time.
What we do need is a more-or-less solid upgrade path using pg_dump.
BTW, how hard would it be to move pg_dump inside the backend (perhaps as a dynamically loaded function, to save space when not used) so that it could be used like COPY?
pg> DUMP table [ WITH 'other cmdline options' ] TO stdout ;
pg> DUMP * [ WITH 'other cmdline options' ] TO stdout ;
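For comparison, the data half of this already exists, since COPY can already send a table to the client (table and database names here are just placeholders):

psql -c "COPY mytable TO stdout" mydb > mytable.dat

What the hypothetical DUMP command would add is emitting the schema (CREATE statements, ACLs, indexes and so on) from inside the backend as well.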
----------------
Hannu