From: Andres Freund <andres(at)anarazel(dot)de>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Paul Ramsey <pramsey(at)cleverelephant(dot)ca>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Darafei Praliaskouski <me(at)komzpa(dot)net>, pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: Parallel threads in query
Date: 2018-11-01 18:40:37
Message-ID: 20181101184037.c24cgma7y6f4kp3t@alap3.anarazel.de
Lists: pgsql-hackers
Hi,
On 2018-11-01 19:33:39 +0100, Tomas Vondra wrote:
> In theory, simulating such a global limit should be possible using a bit
> of shared memory for the current total, a per-process counter, and probably
> some simple abort handling (say, like contrib/openssl does using
> ResourceOwner).
Right. I don't think you even need something resowner-like, given that
anything using threads had better make it absolutely impossible that an
error can escape.
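For illustration, a minimal sketch of what that shared-memory limit could
look like. The names (max_worker_threads, ThreadLimitAcquire, etc.) are
hypothetical, not an existing API; the pg_atomic_* and ShmemInitStruct
calls are the real ones:

/*
 * Hypothetical sketch of a global thread-count limit kept in shared
 * memory.  Names are illustrative only.
 */
#include "postgres.h"
#include "port/atomics.h"
#include "storage/shmem.h"

static int max_worker_threads = 64;    /* assumed GUC */

typedef struct ThreadLimitShared
{
    pg_atomic_uint32 active_threads;   /* current total across backends */
} ThreadLimitShared;

static ThreadLimitShared *thread_limit;

/* call from shmem_startup_hook */
static void
ThreadLimitShmemInit(void)
{
    bool found;

    thread_limit = ShmemInitStruct("thread_limit",
                                   sizeof(ThreadLimitShared), &found);
    if (!found)
        pg_atomic_init_u32(&thread_limit->active_threads, 0);
}

/*
 * Reserve room for nthreads more threads; returns false if the global
 * limit would be exceeded.  Must be paired with ThreadLimitRelease(),
 * and the caller must guarantee no error escapes while threads run.
 */
static bool
ThreadLimitAcquire(uint32 nthreads)
{
    uint32 cur = pg_atomic_read_u32(&thread_limit->active_threads);

    while (cur + nthreads <= (uint32) max_worker_threads)
    {
        /* on failure, cur is updated to the current value; retry */
        if (pg_atomic_compare_exchange_u32(&thread_limit->active_threads,
                                           &cur, cur + nthreads))
            return true;
    }
    return false;
}

static void
ThreadLimitRelease(uint32 nthreads)
{
    pg_atomic_fetch_sub_u32(&thread_limit->active_threads, nthreads);
}

The per-process counter Tomas mentions would presumably track each
backend's outstanding reservation, so an abort path knows how much to
hand back via ThreadLimitRelease().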
> A better solution might be to start a bgworker managing a connection
> pool and forward the requests to it using IPC (and enforce the thread
> count limit there).
That doesn't really seem feasible for cases like this - after all, you'd
only use threads to work on individual rows if you wanted to parallelize
relatively fine-grained per-row work. Adding cross-process IPC seems
like it'd make that perform badly.
Greetings,
Andres Freund