From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Andres Freund <andres(at)2ndquadrant(dot)com>
Cc: Jeff Davis <pgsql(at)j-davis(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: group locking: incomplete patch, just for discussion
Date: 2014-11-21 13:31:01
Message-ID: CA+TgmoZrONqcXWWjebbfbKoD5qcqNc1XPoTGmO7RiiG=C1Qaqw@mail.gmail.com
Lists: pgsql-hackers
On Thu, Nov 20, 2014 at 8:31 PM, Andres Freund <andres(at)2ndquadrant(dot)com> wrote:
>> I can't follow that logic. Why do you think self-exclusive locks are
>> the only ones that matter?
>
> All the others can be acquired by jumping in front of the lock's
> waitqueue?
That is true, as a practical matter, in many cases. But from the
point of view of the lock manager, it's really not true. It is almost
true if you assume that the lock level acquired by the parallel
backend will be weaker than the lock level held by the user backend,
but not quite, because RowExclusiveLock conflicts with ShareLock.
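To make that concrete, here is a standalone sketch of the conflict
check. The mode numbering and masks below mirror the standard
relation-lock conflict table in lock.c, but the encoding is purely
illustrative, not the actual source:

#include <stdbool.h>
#include <stdio.h>

/* Lock modes, numbered as in lockdefs.h. */
enum
{
    AccessShareLock = 1, RowShareLock, RowExclusiveLock,
    ShareUpdateExclusiveLock, ShareLock, ShareRowExclusiveLock,
    ExclusiveLock, AccessExclusiveLock
};

#define LOCKBIT(mode) (1 << (mode))

/* Conflict masks for each requested mode, per the standard table. */
static const int lock_conflicts[] = {
    [AccessShareLock] = LOCKBIT(AccessExclusiveLock),
    [RowShareLock] = LOCKBIT(ExclusiveLock) | LOCKBIT(AccessExclusiveLock),
    [RowExclusiveLock] = LOCKBIT(ShareLock) | LOCKBIT(ShareRowExclusiveLock) |
        LOCKBIT(ExclusiveLock) | LOCKBIT(AccessExclusiveLock),
    [ShareUpdateExclusiveLock] = LOCKBIT(ShareUpdateExclusiveLock) |
        LOCKBIT(ShareLock) | LOCKBIT(ShareRowExclusiveLock) |
        LOCKBIT(ExclusiveLock) | LOCKBIT(AccessExclusiveLock),
    [ShareLock] = LOCKBIT(RowExclusiveLock) | LOCKBIT(ShareUpdateExclusiveLock) |
        LOCKBIT(ShareRowExclusiveLock) | LOCKBIT(ExclusiveLock) |
        LOCKBIT(AccessExclusiveLock),
    [ShareRowExclusiveLock] = LOCKBIT(RowExclusiveLock) |
        LOCKBIT(ShareUpdateExclusiveLock) | LOCKBIT(ShareLock) |
        LOCKBIT(ShareRowExclusiveLock) | LOCKBIT(ExclusiveLock) |
        LOCKBIT(AccessExclusiveLock),
    [ExclusiveLock] = LOCKBIT(RowShareLock) | LOCKBIT(RowExclusiveLock) |
        LOCKBIT(ShareUpdateExclusiveLock) | LOCKBIT(ShareLock) |
        LOCKBIT(ShareRowExclusiveLock) | LOCKBIT(ExclusiveLock) |
        LOCKBIT(AccessExclusiveLock),
    [AccessExclusiveLock] = LOCKBIT(AccessShareLock) | LOCKBIT(RowShareLock) |
        LOCKBIT(RowExclusiveLock) | LOCKBIT(ShareUpdateExclusiveLock) |
        LOCKBIT(ShareLock) | LOCKBIT(ShareRowExclusiveLock) |
        LOCKBIT(ExclusiveLock) | LOCKBIT(AccessExclusiveLock)
};

static bool
modes_conflict(int held, int requested)
{
    return (lock_conflicts[requested] & LOCKBIT(held)) != 0;
}

int
main(void)
{
    /* User backend holds ShareLock; worker requests the "weaker"
     * RowExclusiveLock.  They still conflict, so weaker-than is not
     * the same thing as conflict-free. */
    printf("%d\n", modes_conflict(ShareLock, RowExclusiveLock)); /* prints 1 */
    return 0;
}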
The assumption that the parallel backend's lock level will always be
weaker seems shaky. It's probably true for relation locks, but if we
ever want to support parallel workers that can write any data
whatsoever, they're going to need to be able to take relation
extension locks. If one backend in a parallel group attempts to
acquire the relation extension lock while another worker holds it, the
rule that all group members should be regarded as waiting for each
other will result in instantaneous deadlock. That's pretty
undesirable, and suggests to me that the whole model is defective in
some way.
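Here's a toy model of why that rule misbehaves for extension locks.
None of this is the patch's code; the struct and field names are
invented for the sketch:

#include <stdbool.h>
#include <stdio.h>

/* Illustrative structure only. */
typedef struct Proc
{
    int pid;
    int group_leader_pid;   /* same value for every member of one group */
} Proc;

/*
 * Under the rule that all members of a parallel group are regarded as
 * waiting for one another, a backend blocking on a lock held by a
 * fellow group member completes a two-node cycle immediately:
 * waiter -> holder (the real wait) plus holder -> waiter (the imputed
 * one).  The deadlock detector has no choice but to fire.
 */
static bool
wait_is_instant_deadlock(const Proc *waiter, const Proc *holder)
{
    return waiter->group_leader_pid == holder->group_leader_pid;
}

int
main(void)
{
    /* Two workers of one group contend for a relation extension lock. */
    Proc a = {101, 100};    /* holds the extension lock, briefly */
    Proc b = {102, 100};    /* requests it and blocks */

    printf("instant deadlock: %d\n", wait_is_instant_deadlock(&b, &a)); /* 1 */
    return 0;
}

Worker A would release the extension lock momentarily, but the imputed
wait from A back to B makes the detector declare a deadlock anyway.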
>> > I don't buy the plan time/execution time argument. We'll need to be able
>> > to adapt plans to the availability of bgworker slots *anyway*. Otherwise
>> > we either need to set the number of bgworkers to "MaxConnections *
>> > MaxParallelism" or live with errors as soon as too many processes use
>> > parallelism. The former is clearly unrealistic given PG's per-backend
>> > overhead and the latter will be a *much* bigger PITA than all the
>> > deadlock scenarios talked about here. It'll constantly fail, even if
>> > everything is ok.
>>
>> I certainly think that any parallel node needs to be able to cope with
>> getting fewer workers than it would ideally like to have. But that's
>> different from saying that when our own backend takes new locks, we
>> have to invalidate plans. Those seem like largely unrelated problems.
>
> No need to invalidate them if you have the infrastructure to run with no
> parallel workers - just use the path that's used if there are no workers
> available.
Well, it's OK for a query to run with less parallelism than the
planner thought optimal. It's not OK for it to attempt to run with
the requested degree of parallelism and deadlock. I think if the
deadlock detector gets involved it's quite unlikely that you're going
to be able to make things degrade smoothly to non-parallel operation.
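Here's the sort of degradation I mean, as a sketch with invented names
rather than any real launcher API:

#include <stdio.h>

/* Hypothetical launcher: starts as many workers as there are free slots. */
static int
launch_background_workers(int nrequested, int navailable)
{
    return nrequested < navailable ? nrequested : navailable;
}

static void
exec_gatherish(int nplanned, int navailable)
{
    int nlaunched = launch_background_workers(nplanned, navailable);

    /*
     * Tolerate getting fewer workers than planned, including none: the
     * leader simply runs the plan itself.  What must not happen is
     * proceeding at the planned degree and then deadlocking.
     */
    if (nlaunched == 0)
        printf("leader runs the whole plan itself\n");
    else
        printf("leader plus %d worker(s) run the plan\n", nlaunched);
}

int
main(void)
{
    exec_gatherish(4, 2);   /* planner wanted 4, only 2 slots free */
    exec_gatherish(4, 0);   /* no slots free: degrade to serial */
    return 0;
}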
>> I
>> seriously doubt that can ever be made safe, but I'd choose to prohibit
>> it using something other than the heavyweight lock manager.
>
> Right. It's perfectly fine to forbid it imo. But there's lots of places
> that have assumptions about the locking behaviour baked in, and the
> relevant bugs will be hard to find.
That's pretty vague. I think we can rule out the great bulk of the
problem cases by prohibiting backends from starting a DDL command
while parallel mode is in effect. (We may be parallelizing a DDL
command or just a regular query, but once we've initiated parallelism,
no NEW DDL commands can be started.)
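In code, the guard could be as simple as the following sketch, assuming
a hypothetical in-parallel-mode flag; none of these names come from the
actual patch:

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical flag: set once this backend has initiated parallelism. */
static bool in_parallel_mode = false;

/* Guard to be called at the start of each DDL command. */
static void
prevent_ddl_in_parallel_mode(const char *cmdname)
{
    if (in_parallel_mode)
    {
        fprintf(stderr, "ERROR: cannot execute %s while in parallel mode\n",
                cmdname);
        exit(1);
    }
}

int
main(void)
{
    in_parallel_mode = true;                       /* parallelism initiated */
    prevent_ddl_in_parallel_mode("ALTER TABLE");   /* reports the error */
    return 0;
}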
We should focus on trying to foresee what can go wrong apart from
that. That's why I think your example about smgrDoPendingDeletes() is
a pretty helpful one to think about, but this one seems like something
that has no chance of slipping through the cracks.
>> The heavyweight locking issue is really killing me, though.
>
> I don't really understand why you're not content with just detecting
> deadlocks for now. Everything else seems like bells and whistles for
> later.
I don't think it's OK for a parallel query to just randomly deadlock.
It's OK for a parallel query to excuse itself politely and retire from
the field, but I don't think it's OK for it to say, oh, sure, I can
parallelize that for you, and then fail by deadlocking with itself.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company