From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Abhijit Menon-Sen <ams(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Thom Brown <thom(at)linux(dot)com>, Fabien COELHO <coelho(at)cri(dot)ensmp(dot)fr>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [BUGS] Breakage with VACUUM ANALYSE + partitions
Date: 2016-04-25 20:29:36
Message-ID: CA+TgmobdydzhyzsF1gFNu9P=b_uADoM3R-BFGFuF-+heT9OaJw@mail.gmail.com
Lists: pgsql-bugs pgsql-hackers
On Mon, Apr 25, 2016 at 2:05 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> Well, I posted a patch. I'd have applied it too (after addressing your
> comments obviously), except that there's some interdependencies with the
> nmsgs > 0 thread (some of my tests fail spuriously without that
> fixed). Early last week I waited for a patch on that thread, but when
> that didn't materialize by Friday I switched to work on that [1]. With
> both fixes applied I can't reproduce any problems anymore.
OK. What are the interdependencies? You've said that a few times but
I am not clear on what the nature of those interdependencies is.
> About the delay: Sure, it'd be nicer if I'd addressed this
> immediately. But priority-wise it's an issue that's damned hard to hit,
> and where the worst known consequence is having to reconnect; that's not
> earth shattering. So spending time to work on the, imo, more severe
> performance regressions seemed to be more important; maybe I was wrong
> in prioritizing things that way.
Do you mean performance regression related to this patch, or to other
patches like the atomic buffer pin/unpin stuff?
>> We have a patch, and that's good, and I have reviewed it and
>> Thom has tested it, and that's good, too. But it is not clear whether
>> you feel confident to commit it or when you might be planning to do
>> that, so I asked. Given that this is the open item of longest tenure
>> and that we're hoping to ship beta soon, why is that out of line?
>
> Well, if you'd asked it that way, then I'd not responded the way I have.
Fair enough, but keep in mind that a few more mildly-phrased emails
from Noah didn't actually get us to the point where this is fixed, or
even to a clear explanation of when it might be fixed or why it wasn't
getting fixed sooner.
>> That commit introduced a new way to write to blocks that might have in
>> the meantime been removed, and it failed to make that safe.
>
> There's no writing of any blocks involved - the problem is just about
> opening segments which might or might not exist.
As I understand it, that commit introduced a backend-private queue;
when it fills, we issue flush requests for particular blocks. By the
time the flush requests are issued, the blocks might have been
truncated away. So it's not writing blocks in the sense of write(),
but persuading the OS to write those blocks to stable storage.
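To make the failure mode concrete, here is a rough sketch of that mechanism, not the actual PostgreSQL code: the queue structure, function names, and fixed queue size are invented for illustration. A backend accumulates flush requests in a small local queue; when the queue fills, it opens each segment file and hints the kernel to write the blocks out. By that point another backend may have truncated the relation and unlinked the segment, so the flush pass has to treat a missing file (ENOENT) as harmless rather than as an error:

```c
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define WB_QUEUE_SIZE 32        /* the backend-local queue is small */

/* One pending flush request: which segment file, and where in it. */
typedef struct PendingWriteback
{
    char    path[128];          /* segment file holding the dirty block */
    off_t   offset;             /* byte offset of the block in that file */
} PendingWriteback;

static PendingWriteback queue[WB_QUEUE_SIZE];
static int queue_len = 0;

/* Issue flush hints for every queued block, then empty the queue. */
static void
issue_pending_writebacks(void)
{
    for (int i = 0; i < queue_len; i++)
    {
        int fd = open(queue[i].path, O_WRONLY);

        if (fd < 0)
        {
            /*
             * The segment may have been truncated away by another
             * backend between queueing and flushing.  That is exactly
             * the race under discussion: the request is stale but
             * harmless, so skip it instead of raising an error.
             */
            if (errno == ENOENT)
                continue;
            perror("open");
            continue;
        }

        /*
         * On Linux a real implementation would use
         * sync_file_range(fd, offset, block_size, ...) to hint only the
         * queued block; fsync of the whole file stands in for that here.
         */
        fsync(fd);
        close(fd);
    }
    queue_len = 0;
}

/* Remember a block to flush later; drain the queue when it fills. */
static void
queue_writeback(const char *path, off_t offset)
{
    if (queue_len == WB_QUEUE_SIZE)
        issue_pending_writebacks();
    snprintf(queue[queue_len].path, sizeof(queue[queue_len].path), "%s", path);
    queue[queue_len].offset = offset;
    queue_len++;
}
```

The ENOENT branch is the crux: without it, a queued request against a truncated-away segment turns into a hard error at flush time, which matches the breakage being discussed.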
>> And in fact I think it's a regression that can be
>> expected to be a significant operational problem for people if not
>> fixed, because the circumstances in which it can happen are not very
>> obscure. You just need to hold some pending flush requests in your
>> backend-local queue while some other process truncates the relation,
>> and boom.
>
> I found it quite hard to come up with scenarios to reproduce it. Entire
> segments have to be dropped, the writes to the to-be-dropped segment
> have to result in fully dead rows, only a few further writes outside the
> dropped segment can happen, and invalidations may not fix the problem up. But
> obviously it should be fixed.
It's probably a bit tricky to produce a workload where the
truncated-away block is in some backend's private queue, because the
queues are very small and it's not exactly clear what sort of workload
will produce regular relation truncations, and you need both of those
things to hit this. However, I think it's pretty optimistic to think
that there won't be people in that situation, partly because less than
two years ago we had a customer find a bug in an EDB proprietary
feature whose profile is very similar to this one.
>> I assume you will be pretty darned unhappy if we end up at #2, so I am
>> trying to figure out if we can achieve #1. OK?
>
> Ok.
I'm still not clear on the answer to this. I think the answer is that
you think that this patch is OK to commit with some editing, but the
situation with the nmsgs > 0 patch is not so clear yet because it
hasn't had any review yet. And they depend on each other somehow. Am
I close?
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company