From: Mark Kirkwood <mark(dot)kirkwood(at)catalyst(dot)net(dot)nz>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Admission Control
Date: 2010-07-09 03:00:09
Message-ID: 4C3690B9.5090201@catalyst.net.nz
Lists: pgsql-hackers
On 09/07/10 14:26, Robert Haas wrote:
> On Thu, Jul 8, 2010 at 10:21 PM, Mark Kirkwood
> <mark(dot)kirkwood(at)catalyst(dot)net(dot)nz> wrote:
>
>> Purely out of interest, since the old repo is still there, I had a quick
>> look at measuring the overhead, using 8.4's pgbench to run two custom
>> scripts: one consisting of a single 'SELECT 1', the other having 100
>> 'SELECT 1' statements - the latter probably being the worst-case scenario.
>> Running 1, 2, 4, and 8 clients and 1000-10000 transactions gives an
>> overhead in the 5-8% range [1] (i.e. transactions/s decrease by this
>> amount with the scheduler turned on [2]). While a lot better than 30% (!)
>> it is certainly higher than we'd like.
>>
> Isn't the point here to INCREASE throughput?
>
>
LOL - yes it is! Josh wanted to know what the overhead was for the queue
machinery itself, so I'm running a test to show just that (i.e. I have a
queue with its thresholds set higher than the test will ever reach, so the
scheduler never actually blocks anything).
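For reference, the test setup can be recreated along these lines (a sketch
only - the script file names, client counts, and the 'postgres' database
name are my assumptions, not taken from the original runs):

```shell
# Two custom pgbench scripts: a single 'SELECT 1', and 100 of them
# (the worst case mentioned above).
printf 'SELECT 1;\n' > one.sql
seq 100 | sed 's/.*/SELECT 1;/' > hundred.sql

# 8.4-era pgbench custom-script runs: -n skips vacuuming, -f names the
# script, -c is the client count, -t is transactions per client.
# Compare the reported tps with the scheduler enabled vs. disabled.
run_bench() {
  for c in 1 2 4 8; do
    pgbench -n -f one.sql     -c "$c" -t 10000 postgres
    pgbench -n -f hundred.sql -c "$c" -t 1000  postgres
  done
}
```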
In the situation where (say) 11 concurrent queries of a certain type
make your system unusable, but 10 are fine, then constraining it to a
maximum of 10 will tend to improve throughput. By how much is hard to
say - for instance, if it prevents the Linux OOM killer from shutting
postgres down, the improvement is effectively infinite, I guess :-)
Cheers
Mark