pg_dump, MVCC and consistency

From: Florian Ledoux <florian(dot)ledoux(at)gmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: pg_dump, MVCC and consistency
Date: 2005-10-24 12:29:24
Message-ID: d4f1fdd90510240529p64a9980fl@mail.gmail.com
Lists: pgsql-general

Hello everybody !

I am coming from the (expensive) "Oracle World" and I am a newbie in
PG administration. I am currently working on backup concerns... I am
using pg_dump and I have not encountered any problems, but I have some
questions about how the PG server manages data consistency internally.
I have read some articles about the MVCC mechanism, but I can't see how
it maintains a consistent "snapshot" of the database throughout the
export process.
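To make my mental model of MVCC concrete, here is a simplified sketch of how I understand row-version visibility (my own illustration in Python, not PostgreSQL source; the names `RowVersion`, `xmin`, `xmax`, and `visible` are simplifications — real visibility checks also consider in-progress and aborted transactions):

```python
# Simplified MVCC sketch: each row version records the transaction that
# created it (xmin) and, once updated or deleted, the transaction that
# removed it (xmax). A snapshot taken at some transaction ID sees exactly
# the versions created before it and not yet removed at that point.
# (Assumes all listed transaction IDs are committed.)

from dataclasses import dataclass
from typing import Optional

@dataclass
class RowVersion:
    value: str
    xmin: int                   # transaction that created this version
    xmax: Optional[int] = None  # transaction that removed it, if any

def visible(row: RowVersion, snapshot_xid: int) -> bool:
    """Visible if created before the snapshot and not removed before it."""
    created_before = row.xmin < snapshot_xid
    removed_before = row.xmax is not None and row.xmax < snapshot_xid
    return created_before and not removed_before

# One logical row, updated by transaction 200: the old version is kept,
# so readers with an older snapshot can still see it.
versions = [
    RowVersion("old value", xmin=100, xmax=200),
    RowVersion("new value", xmin=200),
]

# A snapshot taken before the update (xid 150) still sees the old value;
# a later snapshot (xid 250) sees only the new one.
assert [v.value for v in versions if visible(v, 150)] == ["old value"]
assert [v.value for v in versions if visible(v, 250)] == ["new value"]
```

If this picture is right, old row versions stay in the table itself until they are vacuumed, which would explain why no separate rollback area is needed.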

If I have understood correctly, the default transaction isolation level
in PG is "read committed". If that is the isolation level used by
pg_dump, how can I be sure that tables accessed at the end of my export
are consistent with those accessed at the beginning?
Does pg_dump use a serializable isolation level?
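To illustrate what worries me, here is a small sketch (again my own illustration, with invented names like `Database` and `read`) of the difference I have in mind: under read committed each statement sees the latest committed state, so a concurrent writer committing mid-dump would make the export internally inconsistent, while a single snapshot fixed at transaction start would not:

```python
# Sketch: contrast per-statement snapshots ("read committed" style) with
# a single per-transaction snapshot ("serializable" style). The "database"
# is just a history of committed states keyed by a commit counter.

class Database:
    def __init__(self):
        self.committed_xid = 0
        # state after each commit; tables t1 and t2 start at version v1
        self.history = {0: {"t1": "v1", "t2": "v1"}}

    def commit(self, new_state):
        self.committed_xid += 1
        self.history[self.committed_xid] = new_state

def read(db, table, snapshot_xid):
    """Read a table as of the given snapshot."""
    return db.history[snapshot_xid][table]

# Read-committed style: each statement takes a fresh snapshot.
db = Database()
t1_val = read(db, "t1", db.committed_xid)   # first table: sees v1
db.commit({"t1": "v2", "t2": "v2"})         # writer commits mid-dump
t2_val = read(db, "t2", db.committed_xid)   # fresh snapshot: sees v2
assert (t1_val, t2_val) == ("v1", "v2")     # inconsistent export!

# Serializable style: one snapshot fixed at transaction start.
db2 = Database()
snap = db2.committed_xid
t1_val = read(db2, "t1", snap)
db2.commit({"t1": "v2", "t2": "v2"})        # same concurrent commit
t2_val = read(db2, "t2", snap)              # same snapshot: still v1
assert (t1_val, t2_val) == ("v1", "v1")     # consistent export
```

So my question is really whether pg_dump behaves like the second case.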

We have this kind of concern with Oracle, where a "CONSISTENT" flag can
be set in the exp utility to export a consistent snapshot of the
database from the beginning to the end of the process. Unfortunately,
this mode uses rollback segments intensively and can lead to stale-read
errors (also known as "Snapshot too old"). Is there an equivalent of
rollback segments in PG? Are there issues like "snapshot too old" with
heavily concurrent, transactional databases?

I do not have a good knowledge of PG's internal mechanisms; I hope my
questions are clear enough...

Florian
