From: Stefan Krompass <legend_wok(at)yahoo(dot)com(dot)hk>
To: pgsql-general(at)postgresql(dot)org
Subject: Overload
Date: 2005-04-15 15:34:16
Message-ID: 425FDEF8.6060503@yahoo.com.hk
Lists: pgsql-general
Hi,
I'd like to implement a system that prevents a PostgreSQL database from
being overloaded by delaying queries when the database is already under
heavy load, i.e. when the sum of the execution costs of the queries
currently running in the database is already near a certain threshold
and executing the "next" query would push the costs past this threshold.
Limiting the number of queries concurrently in the database to a fixed
number n is out of the question since, in my opinion, n simple queries like
SELECT c FROM t WHERE c = '...';
would generally produce a much lower workload than n complex queries.
So the goal is a more dynamic approach.
But my problem is how to measure the execution costs of a query. My
first thought was to use the optimizer's estimates, but these estimates
only give the time needed to execute the query.
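
For reference, this is the kind of estimate I mean (the table and the
numbers are made up for illustration):

EXPLAIN SELECT c FROM t WHERE c = '...';
                     QUERY PLAN
----------------------------------------------------
 Seq Scan on t  (cost=0.00..25.38 rows=6 width=32)

where cost=0.00..25.38 is the planner's estimated startup and total
cost for the plan.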
I know that the term "execution costs" is somewhat imprecise. Ideally,
the value for the execution costs would "merge" the I/O and the CPU
usage of the query (to be more precise: estimates of the I/O and CPU
usage for the query). I've read the developer manuals, but I didn't
find any information on this. Does PostgreSQL offer information on the
additional workload (execution costs) caused by a query? In case it
does not: does anybody have an idea how I could get an estimate of the
execution costs before executing a query?
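
In case it clarifies what I am after: the closest I have come is
pulling the cost numbers out of EXPLAIN with a helper function along
the lines of the sketch below (the function name and the regular
expression are my own invention, not anything built into PostgreSQL):

-- Sketch: return the planner's estimated total cost for a query by
-- running EXPLAIN through EXECUTE and parsing its first output line,
-- which looks like:  Seq Scan on t  (cost=0.00..25.38 rows=6 width=32)
CREATE OR REPLACE FUNCTION estimated_total_cost(q text) RETURNS numeric AS $$
DECLARE
    rec record;
BEGIN
    FOR rec IN EXECUTE 'EXPLAIN ' || q LOOP
        -- capture the number after the ".." (the total cost)
        RETURN substring(rec."QUERY PLAN" from '[.][.]([0-9]+[.][0-9]+)')::numeric;
    END LOOP;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

A gatekeeper in the application could then call, e.g.,

SELECT estimated_total_cost('SELECT c FROM t WHERE c = ''...''');

sum these values for the queries currently running, and delay the next
query whenever the sum would pass the threshold. But this still only
yields the planner's single cost number, not separate I/O and CPU
figures, which is why I am asking.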
Thanks in advance
Stefan