From: Andres Freund <andres(at)2ndquadrant(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: logical changeset generation v6.2
Date: 2013-10-29 14:47:58
Message-ID: 20131029144758.GC21284@awork2.anarazel.de
Lists: pgsql-hackers
On 2013-10-28 11:54:31 -0400, Robert Haas wrote:
> > There's one snag I currently can see, namely that we actually need to
> > prevent that a formerly dropped relfilenode is getting reused. Not
> > entirely sure what the best way for that is.
>
> I'm not sure in detail, but it seems to me that this all part of the
> same picture. If you're tracking changed relfilenodes, you'd better
> track dropped ones as well.
What I am thinking about is the way GetNewRelFileNode() checks for
preexisting relfilenodes: it uses SnapshotDirty to scan for existing
relfilenodes matching a newly generated oid. Since a dropped relation's
catalog tuple is dead and thus invisible to that snapshot, an already
dropped relation's relfilenode could be reused.
I guess the fix could be as simple as using SatisfiesAny (or, even better,
a wrapper around SatisfiesVacuum that knows about recently dead tuples).
> Completely aside from this issue, what
> keeps a relation from being dropped before we've decoded all of the
> changes made to its data before the point at which it was dropped? (I
> hope the answer isn't "nothing".)
Nothing. But there's no need to prevent it: the relation will still be in
the catalog, and we never access a non-catalog relation's data during
decoding.
Greetings,
Andres Freund
--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services