From: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>
To: Qiu Xiafei <qiuxiafei(at)gmail(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Is is safe to use SPI in multiple threads?
Date: 2016-12-11 06:50:56
Message-ID: CAB7nPqTocQb85rJqt_SzdrC-LbXFgVx1czSHayWuNJLxjXxcTQ@mail.gmail.com
Lists: pgsql-general
On Sun, Dec 11, 2016 at 2:39 PM, Qiu Xiafei <qiuxiafei(at)gmail(dot)com> wrote:
> Because of the one-backend-per-session concept of PG, I think I should
> bind each of my DSL sessions to one bg worker only. It seems to work.
> But is there a way to launch a bg worker when a new session starts,
> just like PG's per-session backends do? Is it possible to have a bg
> worker listen for incoming sessions and launch a new bg worker to
> handle each session as it comes?
There is the concept of dynamic background workers in Postgres. That's
what parallel query uses, for example: at planning time a suitable
number of workers is chosen, and they are then spawned dynamically at
execution time, capped by max_worker_processes. Have a look at
worker_spi in the source tree, a module that demonstrates how to spawn
workers dynamically. With this infrastructure anybody could, for
example, re-create as a plugin what autovacuum does: a launcher process
plus workers created per database depending on what needs to be
cleaned up.
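To make the shape of that concrete, here is a minimal sketch in the
spirit of worker_spi of a backend registering one dynamic worker per
session. The BackgroundWorker struct and
RegisterDynamicBackgroundWorker() are the real API from
postmaster/bgworker.h (9.4 and later); the names my_dsl_module,
my_session_worker_main and launch_session_worker are hypothetical
placeholders for your own module:

```c
#include "postgres.h"
#include "miscadmin.h"
#include "postmaster/bgworker.h"

/* Sketch: spawn one dynamic bgworker to serve one DSL session.
 * Returns false if no worker slot is free. */
static bool
launch_session_worker(Oid session_id)
{
    BackgroundWorker        worker;
    BackgroundWorkerHandle *handle;

    memset(&worker, 0, sizeof(worker));
    worker.bgw_flags = BGWORKER_SHMEM_ACCESS |
                       BGWORKER_BACKEND_DATABASE_CONNECTION;
    worker.bgw_start_time = BgWorkerStart_RecoveryFinished;
    worker.bgw_restart_time = BGW_NEVER_RESTART;
    snprintf(worker.bgw_library_name, BGW_MAXLEN, "my_dsl_module");
    snprintf(worker.bgw_function_name, BGW_MAXLEN,
             "my_session_worker_main");
    snprintf(worker.bgw_name, BGW_MAXLEN, "DSL session worker");
    worker.bgw_main_arg = ObjectIdGetDatum(session_id);
    worker.bgw_notify_pid = MyProcPid;  /* notify this backend on
                                         * worker start/stop */

    /* Fails once all max_worker_processes slots are in use. */
    return RegisterDynamicBackgroundWorker(&worker, &handle);
}
```

The returned handle can then be passed to
WaitForBackgroundWorkerStartup() to block until the worker is actually
running, which is how worker_spi's SQL-callable launcher does it.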
--
Michael
Next Message: Torsten Förtsch | 2016-12-11 10:31:17 | Re: logical decoding output plugin
Previous Message: Qiu Xiafei | 2016-12-11 05:39:35 | Re: Is is safe to use SPI in multiple threads?