From: Zeugswetter Andreas SB <ZeugswetterA(at)wien(dot)spardat(dot)at>
To: "'pgsql-hackers(at)postgresql(dot)org'" <pgsql-hackers(at)postgresql(dot)org>
Subject: physical backup of PostgreSQL
Date: 2000-06-26 10:13:14
Message-ID: 219F68D65015D011A8E000006F8590C605BA5990@sdexcsrv1.f000.d0188.sd.spardat.at
Lists: pgsql-hackers
After further thought, I do think that a physical restore of a backup
made with e.g. tar, with pg_log as the first file of the backup, does indeed work.
I had concerns about the incompletely written PostgreSQL pages, but
those will always be the last pages in the table data files. The problem
with such a page is imho nonexistent, since the offsets of the rows inside
the page stay the same, and the data after the most recently added row also
stays the same. Thus the only remaining problem is that a new row can be
half written because it is split between two system pages. But since that
row will have an open (to be rolled back) [x?]tid with respect to pg_log,
we are certainly not interested in it.
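To illustrate the visibility argument, here is a minimal sketch in Python,
with a plain dictionary standing in for pg_log's per-transaction commit
status (the names pg_log_status and XID_COMMITTED are illustrative only,
not PostgreSQL internals):

# A minimal sketch of the visibility argument. A dictionary stands in
# for pg_log's per-transaction commit status; all names illustrative.
XID_COMMITTED = "committed"
XID_IN_PROGRESS = "in progress"  # treated as rolled back after restore

# pg_log was backed up first, so any row written after that point has
# a transaction id this pg_log does not record as committed.
pg_log_status = {100: XID_COMMITTED, 101: XID_IN_PROGRESS}

def row_is_visible(xmin):
    # A half-written row inserted by transaction 101 is never returned:
    # its transaction never committed in the backed-up pg_log.
    return pg_log_status.get(xmin) == XID_COMMITTED

assert row_is_visible(100)
assert not row_is_visible(101)  # the torn row is simply ignored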
Yes, the indexes will need to be rebuilt, since they change page layout
and pointers (e.g. on a page split) during transactions.
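The rebuild could for example be scripted by shelling out to psql after the
restore; a hedged sketch, where the database name and table list are
assumptions and REINDEX stands in for whatever rebuild mechanism we settle on:

# A hedged sketch: rebuild each table's indexes after a physical
# restore by shelling out to psql. Database name and table list are
# assumptions for illustration.
import subprocess

def rebuild_indexes(dbname, tables):
    for t in tables:
        # REINDEX rebuilds all indexes on the table from the heap data.
        subprocess.run(["psql", "-d", dbname, "-c", "REINDEX TABLE %s;" % t],
                       check=True)

rebuild_indexes("mydb", ["accounts", "orders"])  # hypothetical names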
But the simplicity of backing up your database as part of your normal
system backup makes this very interesting to me, and imho to the community.
The speed difference compared to pg_dump is also tremendous.
One helpful improvement for this would be to add file name extensions
to our files, like *.dat for data and *.idx for indexes, since you will
not want to back up the index files. Missing indexes would then need to
be recreated on first backend startup.
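Putting the pieces together, a hedged sketch of the backup pass itself,
assuming the *.idx convention proposed above (which PostgreSQL does not
use today) and an illustrative data directory path:

# A hedged sketch of the proposed backup pass. The *.idx naming
# convention is the suggestion above, not current behavior; paths
# are illustrative.
import os
import tarfile

def physical_backup(datadir, archive):
    with tarfile.open(archive, "w") as tar:
        # pg_log must be the first file in the archive, so the commit
        # status it records predates every data page copied after it.
        tar.add(os.path.join(datadir, "pg_log"), arcname="pg_log")
        for root, dirs, files in os.walk(datadir):
            for name in files:
                if name == "pg_log" or name.endswith(".idx"):
                    continue  # indexes are rebuilt after restore
                path = os.path.join(root, name)
                tar.add(path, arcname=os.path.relpath(path, datadir))

physical_backup("/usr/local/pgsql/data", "/backup/pgdata.tar")  # example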
Andreas