From: Andres Freund <andres(at)2ndquadrant(dot)com>
To: Stephen Frost <sfrost(at)snowman(dot)net>
Cc: Simon Riggs <simon(at)2ndQuadrant(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Andrew Dunstan <andrew(at)dunslane(dot)net>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pg_dump --snapshot
Date: 2013-05-07 17:05:20
Message-ID: 20130507170520.GA14818@awork2.anarazel.de
Lists: pgsql-hackers
On 2013-05-07 11:01:48 -0400, Stephen Frost wrote:
> * Andres Freund (andres(at)2ndquadrant(dot)com) wrote:
> > On 2013-05-07 08:54:54 -0400, Stephen Frost wrote:
> > > Agreed- but it also isn't currently possible to make it any smaller.
> >
> > Uh. Why not? I think this is what needs to be fixed instead of making
> > the hole marginally smaller elsewhere.
>
> If we're able to fix it- how would we allow users to take advantage of
> that fix when starting their own snapshot and feeding it to pg_dump?
Depends on the fix. I think the most realistic one is making sure
central definitions haven't changed. In that case they wouldn't have to
do anything.
If we get some kind of DDL lock they would probably need to do something
like SELECT pg_prepare_database_dump(); - likely the same thing pg_dump
itself would use.
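To make that concrete, the caller's side could look roughly like this
(pg_export_snapshot() exists today; the prepare call is just the
hypothetical function from above):

    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    -- SELECT pg_prepare_database_dump();   -- hypothetical, only if we grow a DDL lock
    SELECT pg_export_snapshot();            -- returns e.g. '000004F9-1'
    -- keep this transaction open, then in another shell:
    --   pg_dump --snapshot='000004F9-1' ...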
> > You can trivially reproduce the
> > problem with pg_dump today:
>
> Agreed. The idea was to simply avoid making it worse. Your argument
> seems to be that it's already horrible and easily broken and therefore
> we can go ahead and make it worse and no one will complain because no
> one has complained about how bad it is already. I don't follow that.
It doesn't change *anything* about the fundamental problems. We could
just as well say we forbid ALTER TABLE from taking exclusive locks
because that makes the problem far more noticeable.
> > > My suggestion was to lock everything that pg_dump locks, which we
> > > clearly have locks for since pg_dump is acquiring them. Also, I don't
> > > believe it'd be that difficult to identify what pg_dump would lock, at
> > > least in a 'default' whole-database run. This is more of a stop-gap
> > > than a complete solution.
> >
> > The problem is that locking - as shown above - doesn't really help all
> > that much.
>
> It helps in that once we have the lock, things aren't changing under us.
> The closer we can keep that to when the transaction starts, the better..
If you look at my example, the timing of when we take the snapshot isn't
the problem. While we wait for a lock on one object, the not-yet-locked
objects can still change, get dropped, etc. That window is so big that
the timing between snapshot acquisition and trying to get the first
lock is insignificant. Remember, this is only a problem *if* there is
concurrent DDL. And in that case we very, very likely will run into
access exclusive locks and thus will wait for them and hit the described
problem.
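To spell the window out with a concrete (made-up) schedule:

    -- session 1, some DDL in progress:
    BEGIN;
    ALTER TABLE a ADD COLUMN x int;      -- holds ACCESS EXCLUSIVE on a

    -- session 2, pg_dump:
    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    SELECT pg_export_snapshot();         -- snapshot taken around here
    LOCK TABLE a IN ACCESS SHARE MODE;   -- blocks behind session 1

    -- session 1 again, while pg_dump is still waiting:
    ALTER TABLE b ADD COLUMN y int;      -- b isn't locked by pg_dump yet
    DROP TABLE c;                        -- neither is c
    COMMIT;

    -- pg_dump now gets its lock on a, but b has changed underneath it and
    -- c is gone, even though its snapshot still sees the old state.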
> > You would have to do it like:
> > 1) start txn
> > 2) acquire DDL prevention lock
> > 3) assert we do not yet have a snapshot
> > 4) acquire snapshot
> > 5) lock objects
> > 6) release DDL lock
> > 7) dump objects/data
> > 8) commit txn
>
> We'd need a '4.5' to get the list of objects, no?
Yeah, I had folded that into locking the objects.
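Mapped onto SQL it would look something like this (the DDL-lock calls
are purely hypothetical, the table name is a placeholder):

    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;        -- 1
    -- SELECT pg_acquire_ddl_lock();                          -- 2, hypothetical
    SELECT pg_export_snapshot();                              -- 3+4, first snapshot in this txn
    SELECT oid, relname FROM pg_class WHERE relkind = 'r';    -- 4.5, find the objects
    LOCK TABLE some_table IN ACCESS SHARE MODE;               -- 5, once per object
    -- SELECT pg_release_ddl_lock();                          -- 6, hypothetical
    -- 7: dump schema and data
    COMMIT;                                                   -- 8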
> > pg_dump doesn't prevent you from running CREATE TEMPORARY TABLE? That
> > would make it unrunnable in many situations. Especially as we cannot
> > easily (without using several connections at once) release locks before
> > ending a transaction.
>
> I agree that we wouldn't want pg_dump preventing all catalog updates,
> but wouldn't we be able to draw a distinction between objects being
> modified and objects being added?
I don't easily see how. Often enough, object creation modifies existing
catalog rows at some point, so we would need intelligence at some higher
level. I guess it might be possible to reuse the event trigger
infrastructure, if it could already do all that...
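Something crude along these lines might eventually work, though the
advisory-lock convention here is entirely made up:

    CREATE FUNCTION forbid_ddl_during_dump() RETURNS event_trigger
        LANGUAGE plpgsql AS $$
    BEGIN
        -- if a dump-preparation session holds advisory lock 4242, refuse DDL
        IF NOT pg_try_advisory_lock(4242) THEN
            RAISE EXCEPTION 'DDL is blocked while a dump is being prepared';
        END IF;
        PERFORM pg_advisory_unlock(4242);
    END;
    $$;

    CREATE EVENT TRIGGER no_ddl_during_dump
        ON ddl_command_start
        EXECUTE PROCEDURE forbid_ddl_during_dump();

    -- the dump preparation step would then hold the advisory lock:
    --   SELECT pg_advisory_lock(4242);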
> > > This will be an interesting thing to consider when
> > > implementing MVCC for the catalog.
> >
> > I think using a proper MVCC snapshot for catalog scans doesn't, cannot even
> > in all cases, imply having to use the user's transaction's snapshot, just
> > one that guarantees a consistent result while a query is running.
>
> The hope would be to see a consistent view of what you can access while
> the transaction is running..
I don't think that's entirely possible. Think e.g. of constraints,
determination of HOT-ability, ... All of those need to look at the
database state as it is valid now, not as it was valid back when our
snapshot started.
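As a concrete example of the kind of thing I mean (schema made up):

    -- session 1, long running:
    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    SELECT count(*) FROM t;                -- establishes the snapshot

    -- session 2, later:
    CREATE INDEX t_val_idx ON t (val);     -- commits, invisible to session 1's snapshot

    -- session 1 again:
    UPDATE t SET val = val + 1 WHERE id = 1;
    -- the UPDATE has to see and maintain t_val_idx (and so cannot be a HOT
    -- update), even though session 1's data snapshot predates the index.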
> > c) Quite possibly the snapshot we need needs to meet some special
> > criterions that pg_dump cannot guarantee.
>
> That's a much more difficult case, of course. It would help to have a
> better understanding of what, exactly, Simon's use-case for this feature
> is to answer if this is an issue or not.
I am sure it's an issue for at least one of Simon's use cases, because
one of them is also mine ;). But I think there are many use cases that
require this.
In the logical decoding patches (submitted previously, to be resubmitted
once 9.3 is getting somewhere), we export a snapshot once we have read
enough WAL that all changes from that point onwards can be streamed out.
That snapshot is obviously very useful for getting a new node up and
running.
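Very roughly, and with made-up names since the interface isn't settled,
initializing a new node would look like:

    -- 1) set up decoding on the origin; once enough WAL has been read it
    --    exports a snapshot, say '000004FA-1' (the call below is made up):
    --      SELECT * FROM start_logical_decoding('my_slot');
    -- 2) copy the initial state using exactly that snapshot:
    --      pg_dump --snapshot='000004FA-1' origin_db | psql new_node_db
    -- 3) stream all further changes from the decoding slot to the new node;
    --    the dump and the stream line up, so nothing is lost or duplicated.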
Greetings,
Andres Freund
--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services