From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Dilip Kumar <dilipbalaut(at)gmail(dot)com>
Cc: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Speed up Clog Access by increasing CLOG buffers
Date: 2016-09-14 15:29:48
Message-ID: CA+TgmoYO1wNi7dRwNxGUupmMPhOkae-pmU38ndC1s6FDQ-USJg@mail.gmail.com
Lists: pgsql-hackers
On Wed, Sep 14, 2016 at 12:55 AM, Dilip Kumar <dilipbalaut(at)gmail(dot)com> wrote:
> 2. Results
> ./pgbench -c $threads -j $threads -T 10 -M prepared postgres -f script.sql
> scale factor: 300
> Clients    head(tps)    grouplock(tps)    granular(tps)
> -------    ---------    --------------    -------------
> 128        29367        39326             37421
> 180        29777        37810             36469
> 256        28523        37418             35882
>
>
> grouplock --> 1) Group mode to reduce CLOGControlLock contention
> granular --> 2) Use granular locking model
>
> I will test the 3rd approach as well, whenever I get time.
>
> 3. Summary:
> 1. Compared to head, we are gaining almost ~30% performance at higher
> client counts (128 and beyond).
> 2. Group lock is ~5% better compared to granular lock.
Sure, but you're testing at *really* high client counts here. Almost
nobody is going to benefit from a 5% improvement at 256 clients. You
need to test 64 clients and 32 clients and 16 clients and 8 clients
and see what happens there. Those cases are a lot more likely than
these stratospheric client counts.
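
For reference, a minimal sketch of such a sweep, reusing the pgbench
invocation from the quoted test (script.sql and the postgres database
are taken from that test; the loop and the exact client counts are just
illustrative):

    #!/bin/bash
    # Sweep a range of client counts with the same custom script,
    # from low concurrency up to the counts already measured.
    for c in 8 16 32 64 128 180 256
    do
        # -c sets client connections and -j sets worker threads; scaling
        # them together matches the quoted test. Runs longer than -T 10
        # would give steadier numbers.
        ./pgbench -c "$c" -j "$c" -T 10 -M prepared -f script.sql postgres
    done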
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company