From: | Joe Uhl <joeuhl(at)gmail(dot)com> |
---|---|
To: | Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com> |
Cc: | Greg Smith <gsmith(at)gregsmith(dot)com>, Gregory Stark <stark(at)enterprisedb(dot)com>, pgsql-performance(at)postgresql(dot)org |
Subject: | Re: High CPU Utilization |
Date: | 2009-03-24 18:47:36 |
Message-ID: | 1E10954A-136D-4E1E-B576-31D09EE46499@gmail.com |
Lists: | pgsql-performance |
On Mar 20, 2009, at 4:58 PM, Scott Marlowe wrote:
> On Fri, Mar 20, 2009 at 2:49 PM, Joe Uhl <joeuhl(at)gmail(dot)com> wrote:
>>
>> On Mar 20, 2009, at 4:29 PM, Scott Marlowe wrote:
>
>>> What does the cs entry on vmstat say at this time? If your cs is
>>> skyrocketing then you're getting a context switch storm, which is
>>> usually a sign that there are just too many things going on at
>>> once / you've got an old kernel, things like that.
>>
>> cs column (plus cpu columns) of vmstat 1 30 reads as follows:
>>
>> cs us sy id wa
>> 11172 95 4 1 0
>> 12498 94 5 1 0
>> 14121 91 7 1 1
>> 11310 90 7 1 1
>> 12918 92 6 1 1
>> 10613 93 6 1 1
>> 9382 94 4 1 1
>> 14023 89 8 2 1
>> 10138 92 6 1 1
>> 11932 94 4 1 1
>> 15948 93 5 2 1
>> 12919 92 5 3 1
>> 10879 93 4 2 1
>> 14014 94 5 1 1
>> 9083 92 6 2 0
>> 11178 94 4 2 0
>> 10717 94 5 1 0
>> 9279 97 2 1 0
>> 12673 94 5 1 0
>> 8058 82 17 1 1
>> 8150 94 5 1 1
>> 11334 93 6 0 0
>> 13884 91 8 1 0
>> 10159 92 7 0 0
>> 9382 96 4 0 0
>> 11450 95 4 1 0
>> 11947 96 3 1 0
>> 8616 95 4 1 0
>> 10717 95 3 1 0
>>
>> We are running on the 2.6.28.7-2 kernel. I am unfamiliar with vmstat
>> output, but reading the man page (and that cs = "context switches per
>> second") makes my numbers seem very high.
>
> No, those aren't really all that high. If you were hitting cs
> contention, I'd expect it to be in the 25k to 100k range. <10k
> average under load is pretty reasonable.
>
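Good to know. In case it is useful to anyone following along, here is a rough sketch (untested) of how we might watch that number continuously from the Java side: it samples the cumulative "ctxt" counter in Linux's /proc/stat, which is, as far as I can tell, the same counter vmstat derives its cs column from. The class and output format are just illustrative.

import java.io.BufferedReader;
import java.io.FileReader;

public class CtxtWatch {
    // Read the cumulative context-switch counter ("ctxt") from /proc/stat.
    static long readCtxt() throws Exception {
        BufferedReader r = new BufferedReader(new FileReader("/proc/stat"));
        try {
            String line;
            while ((line = r.readLine()) != null) {
                if (line.startsWith("ctxt ")) {
                    return Long.parseLong(line.substring(5).trim());
                }
            }
        } finally {
            r.close();
        }
        throw new IllegalStateException("no ctxt line found in /proc/stat");
    }

    public static void main(String[] args) throws Exception {
        long prev = readCtxt();
        while (true) {
            Thread.sleep(1000);
            long cur = readCtxt();
            // The one-second delta should roughly match vmstat's cs column.
            System.out.println("cs/sec: " + (cur - prev));
            prev = cur;
        }
    }
}
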
>> Our JDBC pools combined currently top out at 400 connections (and we
>> are doing work on all 400 right now). I may try dropping those pools
>> down even smaller. Are there any general rules of thumb for figuring
>> out how many connections you should service at maximum? I know about
>> the memory constraints, but I am thinking more along the lines of
>> connections per CPU core.
>
> Well, maximum efficiency is usually somewhere in the range of 1 to 2
> times the number of cores you have, so trying to get the pool down to
> a dozen or two connections would generally be the direction to head.
> That may not be reasonable or doable, though.
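Thanks, we will experiment with dropping the pools. For reference, something along these lines (just an untested sketch using Commons DBCP; the JDBC URL and credentials are placeholders, not our real settings) is how we would cap a pool at roughly 2x the core count rather than the 400 we run now:

import org.apache.commons.dbcp.BasicDataSource;

public class PoolConfig {
    public static BasicDataSource createPool() {
        int cores = Runtime.getRuntime().availableProcessors();
        // Rule of thumb from this thread: roughly 1-2x the number of cores.
        int maxConnections = cores * 2;

        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("org.postgresql.Driver");
        ds.setUrl("jdbc:postgresql://dbhost/appdb"); // placeholder host/db
        ds.setUsername("appuser");                   // placeholder credentials
        ds.setPassword("secret");
        ds.setMaxActive(maxConnections);  // hard cap on checked-out connections
        ds.setMaxIdle(maxConnections);
        ds.setMaxWait(5000L);             // ms a caller waits for a free connection
        return ds;
    }
}

The idea being that excess requests queue in the pool waiting for a connection instead of piling more active backends onto the database.
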
Turns out we may have an opportunity to purchase a new database server
given this increased load. It seems that the best route, based on the
feedback in this thread, is to go whitebox, get quad Opterons, and get
a very good disk controller.

Can anyone recommend a whitebox vendor?

Is there a current disk controller anyone on this list has experience
with and could recommend?

This will be a bigger purchase, so we will be doing research and
benchmarking, but any general pointers to a vendor/controller would be
greatly appreciated.