From: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Andres Freund <andres(at)2ndquadrant(dot)com>, Bernd Helmle <mailings(at)oopsware(dot)de>, PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Better support of exported snapshots with pg_dump
Date: 2014-09-04 14:33:39
Message-ID: CAB7nPqTg_o+VRYywPEahQFKgUWxogC+vitcOpoEM8UQ1KXY5OQ@mail.gmail.com
Lists: pgsql-hackers
On Wed, Sep 3, 2014 at 11:57 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> I didn't find that option to be terribly important then, but I don't
> see how we can possibly get by without it now, unless our goal is to
> make logical decoding as hard to use as we possibly can.
Yes. With 9.4 it is possible to take a consistent database snapshot
when creating a slot, but it is tricky because of how ephemeral
exported snapshots are:
- When using CREATE_REPLICATION_SLOT, an exported snapshot lives only
for as long as the replication connection that created it stays open.
- The result of pg_export_snapshot() is only valid for the duration of
the transaction in which the function is called (illustrated below).
- pg_create_logical_replication_slot() cannot export a snapshot at all.
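As a quick illustration of the second limitation (a minimal libpq
sketch of mine, not something from this thread), importing an exported
snapshot only works while the exporting transaction is still open:

/*
 * Sketch: pg_export_snapshot() lifetime. The snapshot can be imported
 * with SET TRANSACTION SNAPSHOT only while the exporting transaction
 * is still open; once it commits, the import fails.
 */
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
	PGconn	   *exporter = PQconnectdb("dbname=postgres");
	PGconn	   *importer = PQconnectdb("dbname=postgres");
	PGresult   *res;
	char		sql[128];

	if (PQstatus(exporter) != CONNECTION_OK ||
		PQstatus(importer) != CONNECTION_OK)
	{
		fprintf(stderr, "connection failed\n");
		return 1;
	}

	/* The snapshot only exists while this transaction is open. */
	PQclear(PQexec(exporter, "BEGIN ISOLATION LEVEL REPEATABLE READ"));
	res = PQexec(exporter, "SELECT pg_export_snapshot()");
	snprintf(sql, sizeof(sql), "SET TRANSACTION SNAPSHOT '%s'",
			 PQgetvalue(res, 0, 0));
	PQclear(res);

	/* Import from a second session: works, both see the same data. */
	PQclear(PQexec(importer, "BEGIN ISOLATION LEVEL REPEATABLE READ"));
	res = PQexec(importer, sql);
	printf("while exporter is open: %s\n",
		   PQresultStatus(res) == PGRES_COMMAND_OK ?
		   "ok" : PQerrorMessage(importer));
	PQclear(res);
	PQclear(PQexec(importer, "COMMIT"));

	/* Once the exporting transaction ends, the snapshot is gone. */
	PQclear(PQexec(exporter, "COMMIT"));
	PQclear(PQexec(importer, "BEGIN ISOLATION LEVEL REPEATABLE READ"));
	res = PQexec(importer, sql);
	printf("after exporter committed: %s\n",
		   PQresultStatus(res) == PGRES_COMMAND_OK ?
		   "ok" : PQerrorMessage(importer));
	PQclear(res);

	PQfinish(exporter);
	PQfinish(importer);
	return 0;
}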
So now (if I am correct), the only way to get a consistent dump of a
database is to keep a replication connection open after creating a
replication slot on it. Still, this is really application-dependent,
and it assumes as well that the schema is not modified, as mentioned in
this thread. Anything that facilitates the user experience on this side
would be a great step for things like online upgrades. Perhaps we could
have pg_dump, or a wrapper on top of pg_dump, create a logical
replication slot and then take a consistent image of the database it is
based on while the replication connection is open.
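As a rough libpq sketch of what such a wrapper could do (my own names
and flow, assuming the snapshot exported by CREATE_REPLICATION_SLOT
stays usable while the replication connection sits idle):

/*
 * Sketch: create a logical slot over a replication connection and keep
 * that connection idle while a second session imports the exported
 * snapshot to take a consistent image of the database.
 */
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
	/* replication=database gives us the walsender command set. */
	PGconn	   *repconn = PQconnectdb("dbname=postgres replication=database");
	PGconn	   *dumpconn = PQconnectdb("dbname=postgres");
	PGresult   *res;
	char		sql[128];

	/*
	 * Returns slot_name, consistent_point, snapshot_name, output_plugin.
	 * No further command must run on repconn while the snapshot is in use.
	 */
	res = PQexec(repconn,
				 "CREATE_REPLICATION_SLOT dump_slot LOGICAL test_decoding");
	if (PQresultStatus(res) != PGRES_TUPLES_OK)
	{
		fprintf(stderr, "%s", PQerrorMessage(repconn));
		return 1;
	}
	snprintf(sql, sizeof(sql), "SET TRANSACTION SNAPSHOT '%s'",
			 PQgetvalue(res, 0, 2));	/* snapshot_name */
	PQclear(res);

	/*
	 * The dump session sees exactly the state the slot was created at;
	 * every change after that point can be fetched by decoding the slot.
	 */
	PQclear(PQexec(dumpconn, "BEGIN ISOLATION LEVEL REPEATABLE READ"));
	PQclear(PQexec(dumpconn, sql));
	/* ... run the dump queries here, then COMMIT ... */

	PQfinish(dumpconn);
	PQfinish(repconn);
	return 0;
}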
> Tom's got a good point about the order of locking vs. snapshot taking,
> but I think the way to address that is by adding some capability to
> temporarily lock out all DDL on non-temporary objects across the
> entire system, rather than by trying to make pg_dump (or the walsender
> creating the replication slot) lock every table. Even if we could get
> that to work, it still leaves the very-much-related problem that dumps
> of databases containing many tables can easily exhaust the lock table.
Yes, this is an idea worth digging into. Having system-wide DDL locking
is something that was discussed at some point in XC development for the
addition of new nodes (needed to ensure that the schema stayed
consistent during the migration of data), if I recall correctly.
Looking quickly at the XC code, git-grepping shows a method based on
pg_try_advisory_lock_shared and a global boolean variable set in
PostgresMain, coupled with a check in ProcessUtility preventing a
certain category of DDL from running while the lock is taken. The good
point is that there is already some work done to determine which
utility statements could be allowed even while the lock is held
(EXECUTE, VACUUM, CLUSTER, etc.).
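From memory, the check has roughly this shape (names are illustrative,
not the actual XC code):

#include "postgres.h"
#include "nodes/parsenodes.h"

/* Set in PostgresMain when the shared advisory lock is observed. */
static bool cluster_ddl_locked = false;

/* Utility statements that remain safe while the DDL lock is held. */
static bool
IsStmtAllowedUnderDDLLock(Node *parsetree)
{
	switch (nodeTag(parsetree))
	{
		case T_ExecuteStmt:
		case T_VacuumStmt:
		case T_ClusterStmt:
			return true;
		default:
			return false;
	}
}

/* Called from ProcessUtility before the statement is executed. */
static void
CheckClusterDDLLock(Node *parsetree)
{
	if (cluster_ddl_locked && !IsStmtAllowedUnderDDLLock(parsetree))
		ereport(ERROR,
				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
				 errmsg("DDL is locked out while the cluster lock is held")));
}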
Now, wouldn't a variable in shared memory controlled by some system
function be a better option? There are as well some utility code paths
that we wouldn't want to block, so we would end up with a switch on all
the DDL statement nodes, or a large portion of them.
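What I have in mind would look something like this (a hypothetical
sketch: pg_lock_system_ddl and the struct are invented names, and the
shmem allocation hooks are omitted), with ProcessUtility then checking
the flag the same way as in the sketch above:

#include "postgres.h"
#include "fmgr.h"
#include "storage/spin.h"

typedef struct SystemDDLLockState
{
	slock_t		mutex;
	bool		ddl_locked;
} SystemDDLLockState;

/* Attached at startup via the usual shmem hooks (not shown). */
static SystemDDLLockState *DDLLockState;

PG_FUNCTION_INFO_V1(pg_lock_system_ddl);

/* SQL-callable switch: SELECT pg_lock_system_ddl(true|false); */
Datum
pg_lock_system_ddl(PG_FUNCTION_ARGS)
{
	bool		enable = PG_GETARG_BOOL(0);

	SpinLockAcquire(&DDLLockState->mutex);
	DDLLockState->ddl_locked = enable;
	SpinLockRelease(&DDLLockState->mutex);
	PG_RETURN_BOOL(enable);
}

Thoughts?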
Regards,
--
Michael