From: Noah Misch <noah(at)leadboat(dot)com>
To: Greg Stark <stark(at)mit(dot)edu>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Peter Geoghegan <pg(at)bowt(dot)ie>, Robert Haas <robertmhaas(at)gmail(dot)com>, Thomas Munro <thomas(dot)munro(at)gmail(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>, Kevin Grittner <kgrittn(at)gmail(dot)com>
Subject: Re: snapshot too old issues, first around wraparound and then more.
Date: 2021-06-18 03:49:31
Message-ID: 20210618034931.GB1059064@rfd.leadboat.com
Lists: pgsql-hackers
On Wed, Jun 16, 2021 at 12:00:57PM -0400, Tom Lane wrote:
> Greg Stark <stark(at)mit(dot)edu> writes:
> > I think Andres's point earlier is the one that stands out the most for me:
> >
> > > I still think that's the most reasonable course. I actually like the
> > > feature, but I don't think a better implementation of it would share
> > > much if any of the current infrastructure.
> >
> > That makes me wonder whether ripping the code out early in the v15
> > cycle wouldn't be a better choice. It would make it easier for someone
> > to start work on a new implementation.

Deleting the feature early is better than deleting the feature late,
certainly. (That doesn't tell us about the relative utility of deleting the
feature early versus never deleting the feature.)

> > Fwiw I too think the basic idea of the feature is actually awesome.
> > There are tons of use cases where you might have one long-lived
> > transaction working on a dedicated table (or even a schema) that will
> > never look at the rapidly mutating tables in another schema and never
> > trigger the error even though those tables have been vacuumed many
> > times over during its run-time.
>
> I agree that's a great use-case. I don't like this implementation though.
> I think if you want to set things up like that, you should draw a line
> between the tables it's okay for the long transaction to touch and those
> it isn't, and then any access to the latter should predictably draw an
> error.

I agree that would be a useful capability, but it solves a different problem.
> I really do not like the idea that it might work anyway, because
> then if you accidentally break the rule, you have an application that just
> fails randomly ... probably only on the days when the boss wants that
> report *now* not later.

Every site adopting SERIALIZABLE learns that transactions can fail due to
mostly-unrelated concurrent activity. ERRCODE_SNAPSHOT_TOO_OLD is another
kind of serialization failure, essentially. Moreover, one can opt for an
old_snapshot_threshold value longer than the runtime of the boss's favorite
report. Of course, nobody would reject a replacement that has all the
advantages of old_snapshot_threshold and fewer transaction failures. Once
your feature rewrite starts taking away advantages to achieve fewer
transaction failures, that rewrite gets a lot more speculative.
nm
From | Date | Subject | |
---|---|---|---|
Next Message | Amit Kapila | 2021-06-18 03:50:50 | Re: Decoding speculative insert with toast leaks memory |
Previous Message | Amit Kapila | 2021-06-18 03:48:58 | Re: Fix for segfault in logical replication on master |