From: Merlin Moncure <mmoncure(at)gmail(dot)com>
To: Glyn Astill <glynastill(at)yahoo(dot)co(dot)uk>
Cc: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, david(at)lang(dot)hm, Steve Clark <sclark(at)netwolves(dot)com>, "Joshua D(dot) Drake" <jd(at)commandprompt(dot)com>, Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Linux: more cores = less concurrency.
Date: 2011-04-12 13:24:34
Message-ID: BANLkTine3A9UUT4oYHTNA1tpda6ZtAmZ-A@mail.gmail.com
Lists: pgsql-performance
On Tue, Apr 12, 2011 at 8:23 AM, Merlin Moncure <mmoncure(at)gmail(dot)com> wrote:
> On Tue, Apr 12, 2011 at 3:54 AM, Glyn Astill <glynastill(at)yahoo(dot)co(dot)uk> wrote:
>> --- On Tue, 12/4/11, Merlin Moncure <mmoncure(at)gmail(dot)com> wrote:
>>
>>> >> The issue I'm seeing is that 8 real cores outperform 16 real cores,
>>> >> which outperform 32 real cores under high concurrency.
>>> >
>>> > With every benchmark I've done of PostgreSQL, the "knee" in the
>>> > performance graph comes right around ((2 * cores) +
>>> > effective_spindle_count). With the database fully cached (as I
>>> > believe you mentioned), effective_spindle_count is zero. If you
>>> > don't use a connection pool to limit active transactions to the
>>> > number from that formula, performance drops off. The more CPUs you
>>> > have, the sharper the drop after the knee.
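(To put rough numbers on that formula for this box, taking the 32-core
configuration and a fully cached database so effective_spindle_count is
zero; this is just arithmetic from Kevin's rule of thumb, not a
measurement:)

    pool size ~= (2 * cores) + effective_spindle_count
              =  (2 * 32) + 0
              =  64 active connections

By that rule of thumb, an 80-client test without a pooler is already
past the knee.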
>>>
>>> I was about to say something similar with some canned advice to use a
>>> connection pooler to control this. However, the OP's scaling is more
>>> or less topping out at cores / 4... yikes! Here are my suspicions in
>>> rough order:
>>>
>>> 1. There is a scaling problem in client/network/etc. Trivially
>>> disproved: convert the test to pgbench -f and post results.
>>> 2. The test is in fact i/o bound. Scaling is going to be
>>> hardware/kernel determined. Can we see iostat/vmstat/top snippets
>>> during the test run? Maybe no-op is burning you?
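(For grabbing those iostat/vmstat/top snapshots during a run, something
along these lines should do; a rough sketch assuming the usual
sysstat/procps tools are installed, and the output filenames are just
placeholders:)

# sample once a second for 60 seconds while the benchmark is running
vmstat 1 60 > vmstat.out &
iostat -x 1 60 > iostat.out &
top -b -d 1 -n 60 > top.out &
wait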
>>
>> This is during my 80-client test, at a point where performance is well below that of the same machine limited to 8 cores.
>>
>> http://www.privatepaste.com/dc131ff26e
>>
>>> 3. Locking/concurrency issue in heavy_seat_function() (source for
>>> that?). How much writing does it do?
>>>
>>
>> No writing afaik - it's a select with a few joins and subqueries - I'm pretty sure it's not writing out temp data either, but all clients are after the same data in the test - maybe there's some locking there?
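(One quick way to check that while the 80-client test is running: poll
pg_stat_activity and pg_locks from a separate psql session. A rough
sketch, assuming a pre-9.2 pg_stat_activity layout with the
procpid/waiting/current_query columns:)

-- backends currently blocked waiting on a lock
SELECT procpid, waiting, current_query
FROM pg_stat_activity
WHERE waiting;

-- lock requests that have not been granted
SELECT locktype, relation::regclass, mode, granted
FROM pg_locks
WHERE NOT granted;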
>>
>>> Can we see some I/O-bound and CPU-bound pgbench runs on both servers?
>>>
>>
>> Of course, I'll post when I've gotten to that.
>
> Ok, there's no writing going on -- so the i/o tests aren't necessary.
> Context switches are also not too high -- the problem is likely in
> postgres or on your end.
>
> However, I would still like to see:
> pgbench select only tests:
> pgbench -i -s 1
> pgbench -S -c 8 -t 500
> pgbench -S -c 32 -t 500
> pgbench -S -c 80 -t 500
>
> pgbench -i -s 500
> pgbench -S -c 8 -t 500
> pgbench -S -c 32 -t 500
> pgbench -S -c 80 -t 500
>
> write out bench.sql with:
> begin;
> select * from heavy_seat_function();
> select * from heavy_seat_function();
> commit;
>
> pgbench -n -f bench.sql -c 8 -t 500
> pgbench -n -f bench.sql -c 8 -t 500
> pgbench -n -f bench.sql -c 8 -t 500
whoops:
pgbench -n -f bench.sql -c 8 -t 500
pgbench -n -f bench.sql -c 32 -t 500
pgbench -n -f bench.sql -c 80 -t 500
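(If it saves a step, the whole custom-script series can be scripted
roughly like this; a sketch assuming bash, that connection defaults such
as PGDATABASE are already set, and that heavy_seat_function() exists in
that database:)

cat > bench.sql <<'EOF'
begin;
select * from heavy_seat_function();
select * from heavy_seat_function();
commit;
EOF

for c in 8 32 80; do
    echo "=== $c clients ==="
    pgbench -n -f bench.sql -c $c -t 500
done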
merlin