From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: dynamic background workers
Date: 2013-06-14 21:00:13
Message-ID: CA+TgmoYtQQ-JqAJPxZg3Mjg7EqugzqQ+ZBrpnXo95chWMCZsXw@mail.gmail.com
Lists: pgsql-hackers
Parallel query, or any subset of that project such as parallel sort,
will require a way to start background workers on demand. Thanks to
Alvaro's work on 9.3, we now have the ability to configure background
workers via shared_preload_libraries. But if you don't have the right
library loaded at startup time, and subsequently wish to add a
background worker while the server is running, you are out of luck.
Even if you do have the right library loaded, but want to start
workers in response to user activity, rather than when the database
comes on-line, you are also out of luck. Relaxing these restrictions
is essential for parallel query (or parallel processing of any kind),
and useful apart from that. Two patches are attached.
The first patch, max-worker-processes-v1.patch, adds a new GUC
max_worker_processes, which defaults to 8. This fixes the problem
discussed here:
Apart from fixing that problem, it's a pretty boring patch.
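For reference, the setting described above would appear in postgresql.conf roughly as follows (the default of 8 is the one stated above; like other process-count settings, changing it presumably requires a server restart):

```
# postgresql.conf
max_worker_processes = 8    # maximum number of background worker processes
```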
The second patch, dynamic-bgworkers-v1.patch, revises the background
worker API to allow background workers to be started dynamically.
This requires some communication channel from ordinary workers to the
postmaster, because it is the postmaster that must ultimately start
the newly-registered workers. However, that communication channel has
to be designed pretty carefully, lest a shared memory corruption take
out the postmaster and lead to inadvertent failure to restart after a
crash. Here's how I implemented that: there's an array in shared
memory of a size equal to max_worker_processes. This array is
separate from the backend-private list of workers maintained by the
postmaster, but the two are kept in sync. When a new background
worker registration is added to the shared data structure, the backend
adding it uses the existing pmsignal mechanism to kick the postmaster,
which then scans the array for new registrations. I have attempted to
make the code that transfers the shared memory state into the
postmaster's private state as paranoid as humanly possible. The
precautions taken are documented in the comments. Conversely, when a
background worker flagged as BGW_NEVER_RESTART is considered for
restart (and we decide against it), the corresponding slot in the
shared memory array is marked as no longer in use, allowing it to be
reused for a new registration.
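The registration handoff described above can be sketched in miniature as follows. This is a single-process simulation, not the patch's actual code: the struct, the function names, and the boolean "kick" standing in for the pmsignal mechanism are all invented for illustration.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define MAX_WORKER_PROCESSES 8

/*
 * Illustrative stand-in for one slot of the shared-memory registration
 * array.  The in_use flag is written last, so the postmaster can never
 * observe a half-filled slot as a live registration.
 */
typedef struct WorkerSlot
{
	bool		in_use;
	char		bgw_name[64];
} WorkerSlot;

static WorkerSlot shared_slots[MAX_WORKER_PROCESSES];	/* "shared memory" */
static bool postmaster_kicked;	/* stands in for the pmsignal mechanism */

/* Backend side: claim a free slot, fill it, publish it, kick the postmaster. */
static bool
register_worker(const char *name)
{
	for (int i = 0; i < MAX_WORKER_PROCESSES; i++)
	{
		if (!shared_slots[i].in_use)
		{
			snprintf(shared_slots[i].bgw_name,
					 sizeof(shared_slots[i].bgw_name), "%s", name);
			shared_slots[i].in_use = true;	/* publish last */
			postmaster_kicked = true;	/* SendPostmasterSignal analogue */
			return true;
		}
	}
	return false;				/* all max_worker_processes slots taken */
}

/* Postmaster side: on signal, copy new registrations into private state. */
static int
postmaster_scan(char names[][64], int max)
{
	int			found = 0;

	if (!postmaster_kicked)
		return 0;
	postmaster_kicked = false;

	for (int i = 0; i < MAX_WORKER_PROCESSES && found < max; i++)
	{
		if (shared_slots[i].in_use)
		{
			/* Copy defensively; never trust pointers into shared memory. */
			memcpy(names[found], shared_slots[i].bgw_name,
				   sizeof(shared_slots[i].bgw_name));
			names[found][63] = '\0';
			found++;
		}
	}
	return found;
}
```

The essential point the sketch tries to capture is the one-way, copy-everything discipline: the postmaster pulls a snapshot of each slot into its private list and validates it, rather than ever operating on shared memory in place.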
Since the postmaster cannot take locks, synchronization between the
postmaster and other backends using the shared memory segment has to
be lockless. This mechanism is also documented in the comments. An
LWLock is used to prevent two backends that are both registering a new
worker at about the same time from stomping on each other, but the
postmaster need not care about that LWLock.
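The lockless publication ordering this relies on can be sketched as below. C11 atomics are used here purely for clarity; the patch itself would use PostgreSQL's own barrier primitives, and the struct and function names are invented for the example.

```c
#include <stdatomic.h>
#include <stdbool.h>

/*
 * Minimal sketch of a lockless slot handoff: the registering backend
 * writes the slot's fields first and sets the publication flag last,
 * with release ordering; the postmaster acquire-loads the flag and,
 * only if it is set, reads the fields -- without taking any lock.
 */
typedef struct
{
	int			payload;		/* stands in for the registration fields */
	_Atomic bool in_use;		/* publication flag, written last */
} Slot;

static Slot slot;

/* Registering backend: fill in the fields, then publish. */
static void
publish(int value)
{
	slot.payload = value;
	atomic_store_explicit(&slot.in_use, true, memory_order_release);
}

/*
 * Postmaster: if the acquire-load sees in_use == true, the release
 * store guarantees every field written before it is visible too, so
 * a half-initialized slot can never be observed.
 */
static bool
try_read(int *out)
{
	if (!atomic_load_explicit(&slot.in_use, memory_order_acquire))
		return false;
	*out = slot.payload;
	return true;
}
```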
This patch also extends worker_spi as a demonstration of the new
interface. With this patch, you can CREATE EXTENSION worker_spi and
then call worker_spi_launch(int4) to launch a new background worker,
or combine it with generate_series() to launch a bunch at once. Then
you can kill them off with pg_terminate_backend() and start some new
ones. That, in my humble opinion, is pretty cool.
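A session exercising the demo, using only the calls named above, might look like this (assuming the patched worker_spi is built and installed):

```sql
CREATE EXTENSION worker_spi;

-- launch a single background worker
SELECT worker_spi_launch(1);

-- or launch a bunch at once
SELECT worker_spi_launch(i) FROM generate_series(2, 4) AS i;
```

The launched workers can then be terminated with pg_terminate_backend() and fresh ones started, as described above.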
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachments:
  max-worker-processes-v1.patch (application/octet-stream, 14.5 KB)
  dynamic-bgworkers-v1.patch (application/octet-stream, 35.0 KB)