From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
Cc: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Alexander Voytsekhovskyy <young(dot)inbox(at)gmail(dot)com>, PostgreSQL Bugs <pgsql-bugs(at)postgresql(dot)org>
Subject: Re: BUG in 10.1 - dsa_area could not attach to a segment that has been freed
Date: 2017-11-29 15:36:11
Message-ID: CA+TgmobntbbEe2XDk21KmbjVZb6oY+yRt1omkdKkjUwWYbRJ5Q@mail.gmail.com
Lists: pgsql-bugs
On Tue, Nov 28, 2017 at 8:17 PM, Thomas Munro
<thomas(dot)munro(at)enterprisedb(dot)com> wrote:
> On Wed, Nov 29, 2017 at 1:33 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>> Why not? Can't it just be that the workers are slow getting started?
>
> In the normal non-error control flow, don't we expect
> ExecShutdownGather() to run ExecParallelFinish() before
> ExecParallelCleanup(), meaning that the leader waits for workers to
> finish completely before it detaches itself? Doesn't that need to be
> the case to avoid random "unable to map dynamic shared memory segment"
> and "dsa_area could not attach to a segment that has been freed"
> errors, and for the parallel instrumentation shown in EXPLAIN to be
> reliable?
Oh, hmm.
> Could it be that the leader thought that a worker didn't start up, but
> in fact it did?
Well, I don't know how that could happen, but I can't prove it didn't.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company