From: Andres Freund <andres(at)anarazel(dot)de>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Paul Ramsey <pramsey(at)cleverelephant(dot)ca>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Darafei Praliaskouski <me(at)komzpa(dot)net>, pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: Parallel threads in query
Date: 2018-11-01 19:03:43
Message-ID: 20181101190343.wcxs4mf55cxxx7lt@alap3.anarazel.de
Lists: pgsql-hackers
On 2018-11-01 19:57:17 +0100, Tomas Vondra wrote:
> >> I think that very much depends on how expensive the tasks handled by the
> >> threads are. It may still be cheaper than a reasonable IPC, and if you
> >> don't create/destroy threads, that also saves quite a bit of time.
> >
> > I'm not following. How can you have a pool *and* threads? Those seem to
> > be contradictory in PG's architecture? You need full blown IPC with your
> > proposal afaict?
> >
>
> My suggestion was to create a bgworker, which would then internally
> allocate and manage a pool of threads. It could then open some sort of
> IPC (say, something as dumb as a unix socket). The backends could then send
> requests to it, and it would respond to them. Not sure why/how would
> this contradict PG's architecture?
Because you said "cheaper than a reasonable IPC" - which to me implies
that you don't do full-blown IPC - whereas using threads in a bgworker
very strongly implies full-blown IPC. What you're proposing implies
multiple context switches just to process a few results. Even before
Spectre, but especially after it, that's an expensive proposition.
Greetings,
Andres Freund