From: "Gauthier, Dave" <dave(dot)gauthier(at)intel(dot)com>
To: Craig Ringer <craig(at)postnewspapers(dot)com(dot)au>
Cc: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: limiting resources to users
Date: 2009-11-26 05:47:37
Message-ID: 482E80323A35A54498B8B70FF2B8798004225241F7@azsmsx504.amr.corp.intel.com
Lists: pgsql-general
Actually, limiting resources on a per DB basis would work for me too.
I think Oracle had a thing where you could limit resources, or at least prioritize users.
Someone else was reporting this to me, and again, it was a MySQL DB that I'll eventually be replacing with PG. I'll inherit this problem too, so I'm being proactive in looking for a fix. He said that the cpu was pegged (no mention of I/O). He started killing processes left and right until performance was back (it was either that or a DB restart to clear the slate).
Many users use cron jobs to "refresh" reports and they're not squeamish about refreshing very often. What I'd like to see is that they get their 3% of the server's attention to do this and no more, leaving the other 97% free for non-abusive users.
Thanks for all the help on this !!
-dave
-----Original Message-----
From: Craig Ringer [mailto:craig(at)postnewspapers(dot)com(dot)au]
Sent: Wednesday, November 25, 2009 8:43 PM
To: Gauthier, Dave
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: [GENERAL] limiting resources to users
Gauthier, Dave wrote:
> Is there a way in Postgres to limit how much cpu, memory, other
> resources a certain DB gets? A MySQL DB that I'm now replacing with PG
> has been having problems with run-away users that pound one DB with
> intense processes running periodically in cron jobs. The effect is that
> it takes up all the resources and the users of other DBs suffer because
> the CPU is pegged servicing the first guy.
As others have noticed, it may not just be CPU. I/O is usually the
bottleneck in a DBMS. CPU usage may show up, but if it's shown as
"IOWait" then it's actually time a process spends waiting for disk I/O,
not "real" CPU time used.
Whether the issue is with disk I/O or with CPU, PostgreSQL does not have
anything much in the way of facilities to set priorities for users or
queries.
For CPU, as others have noticed, you can use "nice", but it's a bit
clumsy. Personally I'd recommend installing a C-language function
"nice_backend(prio)" that renices the calling backend to "prio". Limit
"prio" to >= 0, and provide a second function
"nice_backend_super(pid,prio)" that can renice any backend pid and set
any priority, but is usable only by the superuser. I don't know of any
canned implementations of these, but it shouldn't be hard to whip them
up based on the C user defined function documentation and examples,
using the nice(2) system call to do the dirty work.
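I haven't tested this, but a rough sketch of the non-superuser one could look
something like the following (file and function names are just placeholders,
and you'd build/install it per the C-function docs):

    /* nice_backend.c -- rough sketch, not a canned implementation */
    #include "postgres.h"
    #include "fmgr.h"

    #include <errno.h>
    #include <unistd.h>

    PG_MODULE_MAGIC;

    PG_FUNCTION_INFO_V1(nice_backend);

    Datum
    nice_backend(PG_FUNCTION_ARGS)
    {
        int32   prio = PG_GETARG_INT32(0);

        /* unprivileged version: only allow lowering this backend's priority */
        if (prio < 0)
            ereport(ERROR, (errmsg("priority must be >= 0")));

        errno = 0;
        if (nice(prio) == -1 && errno != 0)
            ereport(ERROR, (errmsg("nice(2) failed: %m")));

        PG_RETURN_VOID();
    }

    -- and register it in SQL (library path/name is a placeholder):
    CREATE FUNCTION nice_backend(integer) RETURNS void
        AS 'nice_backend', 'nice_backend' LANGUAGE C STRICT;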
I/O is harder. Some operating systems offer I/O priorities for
processes, like Linux's ionice, and you'd think you could use these in a
similar way to how you use 'nice'. Unfortunately, that won't work well,
because a lot of the work PostgreSQL does - especially disk writes - is
done via a helper background writer process working from memory shared
by all backends. Similarly, the write-ahead logs are managed by their
own process via shared memory. So it's very hard to effectively give one
user priority over another for writes. ionice should be somewhat
effective for reads, though.
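For reads, something like this on Linux can demote a single backend (the pid is
just a placeholder, and it only really takes effect under the CFQ scheduler):

    # drop one backend to the lowest best-effort I/O priority
    ionice -c 2 -n 7 -p 12345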
Additionally, PostgreSQL doesn't have any notion of locking priority. A
"higher priority" transaction has no way to automatically terminate a
lower priority transaction that's doing slow work while holding a
required lock. You can manually terminate the problem process with
pg_cancel_backend(...) after identifying that it holds desired locks,
but this is painful to say the least. This issue only comes up where the
transactions are running in the same database, not just the same
cluster, but it can be a pain. Since it sounds like your priorities are
per-database not per-user or per-query, it probably doesn't matter to
you, but I thought I'd mention it.
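If you ever do have to do it by hand, the shape of it is something like this
(column names vary by version - 8.x uses procpid/current_query in
pg_stat_activity):

    -- see which backends hold granted locks, and what they're running
    SELECT l.pid, l.relation::regclass, l.mode, a.current_query
    FROM pg_locks l
    JOIN pg_stat_activity a ON a.procpid = l.pid
    WHERE l.granted;

    -- then cancel the offender's current query (superuser)
    SELECT pg_cancel_backend(12345);  -- substitute the real pid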
> The PG version will have one PG instance with many DBs. One set of
> these DBs are vital for operations and should have the highest priority.
The best bet here is to have multiple PostgreSQL clusters running on the
machine - if you can't have separate hardware. Have one cluster (ie: a
postmaster with its own listening port, data dir, write-ahead logs, etc)
set to a higher CPU and I/O priority than the other when it starts up.
Those priorities will be inherited by child backends, so connections to
that cluster will have a higher priority on system resources.
Do the same for the badly behaved users - put them in their own cluster,
and renice them down for lower priority access to CPU and I/O resources.
You could even be mean and (if running on Linux) put them in the "Idle"
I/O priority, though that'd probably result in *awful* performance.
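For example (paths, port and numbers here are made up - adapt to however your
clusters actually get started):

    # start the low-priority cluster reniced, at a low best-effort I/O class
    nice -n 10 ionice -c 2 -n 7 \
        pg_ctl -D /var/lib/pgsql/lowprio -o "-p 5433" -l lowprio.log start
    # (ionice -c 3 would be the "idle" class mentioned above)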
The downside of separating DBs into multiple clusters grouped by
priority is that each cluster runs on a different port, it must be
backed up separately, all tuning and configuration must be done
separately - including user management - etc. So there's more work involved.
Unfortunately, you must also allocate separate blocks of shared memory
to the clusters. Even when the high priority cluster is idle, the low
priority one will not be able to use its shared memory for caching, and
vice versa. So you may need more RAM.
--
Craig Ringer