From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: intermittent failures in Cygwin from select_parallel tests
Date: 2017-06-15 21:06:41
Message-ID: 9641.1497560801@sss.pgh.pa.us
Lists: pgsql-hackers
I wrote:
> Robert Haas <robertmhaas(at)gmail(dot)com> writes:
>> I think you're right. So here's a theory:
>> 1. The ERROR mapping the DSM segment is just a case of the worker
>> losing a race, and isn't a bug.
> I concur that this is a possibility,
Actually, no, it isn't. I tried to reproduce the problem by inserting
a sleep into ParallelWorkerMain, and could not. After digging around
in the code, I realize that the leader process *can not* exit the
parallel query before the workers start, at least not without hitting
an error first, which is not happening in these examples. The reason
is that nodeGather cannot deem the query done until it's seen EOF on
each tuple queue, which it cannot see until each worker has attached
to and then detached from the associated shm_mq.
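To make that concrete, here is a minimal sketch of the EOF rule. This is
not the actual nodeGather.c code (the real logic lives in gather_readnext()
and TupleQueueReaderNext()); the helper name and per-worker state below are
invented for illustration, but the shm_mq calls are the real API:

    #include "postgres.h"

    #include "storage/shm_mq.h"

    /*
     * Sketch: the leader treats a worker's queue as finished only when
     * shm_mq_receive() reports SHM_MQ_DETACHED, i.e. the worker attached
     * to the queue and then detached from it.  Until every queue reaches
     * that state, the Gather node cannot declare the query done.
     */
    static void
    drain_tuple_queues(shm_mq_handle **mqh, int nworkers)
    {
        int         nfinished = 0;

        while (nfinished < nworkers)
        {
            int         i;

            for (i = 0; i < nworkers; i++)
            {
                Size        nbytes;
                void       *data;
                shm_mq_result res;

                if (mqh[i] == NULL)
                    continue;       /* already saw EOF on this queue */

                res = shm_mq_receive(mqh[i], &nbytes, &data, true);
                if (res == SHM_MQ_DETACHED)
                {
                    mqh[i] = NULL;  /* attached and then detached: true EOF */
                    nfinished++;
                }
                else if (res == SHM_MQ_SUCCESS)
                {
                    /* got a tuple; process "data" here */
                }

                /*
                 * SHM_MQ_WOULD_BLOCK: the worker hasn't finished (or hasn't
                 * even attached yet).  The real code sleeps on the process
                 * latch here rather than spinning.
                 */
            }
        }
    }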
(BTW, this also means that the leader is frozen solid if a worker
process fails to start, but we knew that already.)
So we still don't know why lorikeet is sometimes reporting "could not map
dynamic shared memory segment". It's clear though that once that happens,
the current code has no prayer of recovering cleanly. It looks from
lorikeet's logs like there is something that is forcing a timeout via
crash after ~150 seconds, but I do not know what that is.
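For reference, the worker-side test that produces that message looks like
this (quoting from memory of ParallelWorkerMain() in
access/transam/parallel.c, so treat it as approximate):

    seg = dsm_attach(DatumGetUInt32(main_arg));
    if (seg == NULL)
        ereport(ERROR,
                (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
                 errmsg("could not map dynamic shared memory segment")));

Whatever is making dsm_attach() return NULL on lorikeet, the worker then
errors out before it ever attaches its tuple queue, so per the above the
leader has no clean way to notice that the worker is gone.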
regards, tom lane