From: | Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com> |
---|---|
To: | David Rowley <david(dot)rowley(at)2ndquadrant(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: CONNECTION LIMIT and Parallel Query don't play well together |
Date: | 2017-02-15 16:19:02 |
Message-ID: | d100f62a-0606-accc-693b-cdc6d16b9296@2ndquadrant.com |
Lists: | pgsql-hackers |
On 1/11/17 5:51 PM, David Rowley wrote:
> Now, since background workers
> don't consume anything from max_connections, then I don't really feel
> that a background worker should count towards "CONNECTION LIMIT". I'd
> assume any CONNECTION LIMITs that are set for a user would be
> calculated based on what max_connections is set to. If we want to
> limit background workers in the same manner, then perhaps we'd want to
> invent something like "WORKER LIMIT N" in CREATE USER.
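For context, the existing per-role cap applies only to client connections and is set with standard syntax; the "WORKER LIMIT N" clause above is a hypothetical proposal, not existing syntax:

```sql
-- Existing syntax: cap concurrent client connections for a role.
-- (Background workers are not counted against this limit.)
CREATE ROLE app_user LOGIN CONNECTION LIMIT 10;
```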
This explanation makes sense, but it kind of upset my background
sessions patch, which would previously have been limited by per-user
connection settings.
So I would like to have a background worker limit per user, as you
allude to. Attached is a patch that implements a GUC setting
max_worker_processes_per_user.
Besides its use for background sessions, it can also be useful for
parallel workers, logical replication apply workers, or things like
third-party partitioning extensions.
Thoughts?
--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Attachment | Content-Type | Size |
---|---|---|
0001-Add-max_worker_processes_per_user-setting.patch | text/x-patch | 5.7 KB |