From: Greg Smith <greg(at)2ndquadrant(dot)com>
To: Craig Ringer <craig(at)postnewspapers(dot)com(dot)au>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: limiting resources to users
Date: 2009-12-01 03:33:06
Message-ID: 4B148E72.3070007@2ndquadrant.com
Lists: pgsql-general
Craig Ringer wrote:
> I assume you look up the associated backend by looking up the source
> IP and port of the client with `netstat', `lsof', etc, and matching
> that to pg_stat_activity?
There's a bunch of ways I've seen this done:
1) If you spawn the psql process from bash with "&", you can find its pid
with "$!", then chain through the process tree with ps and
pg_stat_activity as needed to figure out the backend pid.
2) If you know the query being run and it's unique (often the case with
batch jobs run daily, for example), you can search for it directly in the
query text of pg_stat_activity; there's a rough sketch combining 1) and
2) after this list.
3) Sometimes the only queries you want to re-nice are local, while
everything else is remote. You might filter down possible pids that way.
4) Massage data from netstat, lsof, or similar tools to figure out which
process you want; that route is also sketched below.
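To make 1) and 2) concrete, here's a rough bash sketch; the database name,
batch file, matched query text, and nice value are all made up for
illustration, and the pg_stat_activity columns are the 8.x-era ones
(procpid, current_query):

# Kick off the batch job in the background; $! is the psql client's pid,
# not the backend's, so it still has to be mapped to the server process.
psql -d mydb -f nightly_batch.sql &
PSQL_PID=$!

# If the query text is unique, look the backend up directly.
sleep 5    # give the backend a moment to show up and start the query
BACKEND_PID=$(psql -At -d mydb -c "
  SELECT procpid FROM pg_stat_activity
  WHERE current_query LIKE 'INSERT INTO nightly_summary%'")

# Lower its CPU priority; run this on the server host, as root or the
# postgres OS user, since you can't renice someone else's process.
[ -n "$BACKEND_PID" ] && renice +10 -p "$BACKEND_PID"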
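For the netstat/lsof route in 4), the idea is to find the client's local
TCP port and match it to client_port in pg_stat_activity. A fragile,
illustration-only sketch, assuming the client's pid is already in
$PSQL_PID and the connection is over TCP rather than a Unix socket:

# lsof -F n prints name lines like n127.0.0.1:54321->127.0.0.1:5432;
# pull out the client-side (local) port.
CLIENT_PORT=$(lsof -a -p "$PSQL_PID" -i TCP -F n |
  sed -n 's/^n.*:\([0-9][0-9]*\)->.*/\1/p' | head -1)

# Match that port to the backend serving the connection.
psql -At -d mydb -c \
  "SELECT procpid FROM pg_stat_activity WHERE client_port = $CLIENT_PORT"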
> It makes me wonder if it'd be handy to have a command-line option for
> psql that caused it to spit the backend pid out on stderr.
Inspired by this idea, I just thought of yet another approach. Put this
at the beginning of something you want to track:
COPY (SELECT pg_backend_pid()) TO '/place/to/save/pid';
Not so useful if more than one instance of the query is running at once,
but in the "nice a batch job" context it might be usable.
--
Greg Smith 2ndQuadrant Baltimore, MD
PostgreSQL Training, Services and Support
greg(at)2ndQuadrant(dot)com www.2ndQuadrant.com