From: Dilip Kumar <dilipbalaut(at)gmail(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Andres Freund <andres(at)2ndquadrant(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Relation extension scalability
Date: 2016-02-29 10:07:44
Message-ID: CAFiTN-s-YdPoixtSEfKNpv0QMgJ_7fVyES143tyx5s58d=byPA@mail.gmail.com
Lists: pgsql-hackers
On Wed, Feb 10, 2016 at 7:06 PM, Dilip Kumar <dilipbalaut(at)gmail(dot)com> wrote:
>
I have tested the relation extension patch from various angles; the performance results and other statistical data are explained in this mail.
Test 1: Identify whether the heavyweight lock itself is the problem, or the context switches are
1. I converted the RelationExtensionLock to a plain LWLock and tested with a single relation; a rough sketch of what such a conversion could look like is shown just below.
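For readers following along, this is only a minimal sketch of the idea (not the code that was benchmarked): the heavyweight extension lock is replaced by an LWLock around the P_NEW extension. The lock name RelExtensionTestLock and the wrapper function are made-up placeholders, assumed to be allocated at shared-memory initialization; WAL and page-initialization details are omitted.

    #include "postgres.h"
    #include "storage/bufmgr.h"
    #include "storage/lwlock.h"
    #include "utils/rel.h"

    /*
     * Illustrative sketch only: serialize relation extension with an LWLock
     * instead of the heavyweight lock taken by LockRelationForExtension().
     */
    static LWLock *RelExtensionTestLock;   /* assumed to be set up elsewhere */

    static Buffer
    extend_relation_lwlock(Relation relation)
    {
        Buffer      buffer;

        /* heavyweight version: LockRelationForExtension(relation, ExclusiveLock) */
        LWLockAcquire(RelExtensionTestLock, LW_EXCLUSIVE);

        /* P_NEW appends one new block at the end of the relation */
        buffer = ReadBuffer(relation, P_NEW);

        LWLockRelease(RelExtensionTestLock);

        return buffer;
    }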
The workload is a simple script that COPYs 10,000 records of 4 bytes each in a single transaction (a sketch of such a script follows the Test 1 analysis below). Results are as below:
Client   Base (TPS)   LWLock (TPS)   Multi-extend by 50 blocks (TPS)
1        155          156            160
2        282          276            284
4        248          319            428
8        161          267            675
16       143          241            889
LWLock performance is better than base; the obvious reason may be that we save some instructions by converting to an LWLock, but it does not scale any better than the base code.
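For reference, the copy workload mentioned above (and used again for the ProcSleep measurement below) could look roughly like the following; the table definition and data-file path are assumptions, not the exact script used in these runs.

    -- Minimal sketch of the workload: 10,000 records of 4 bytes each,
    -- loaded by COPY inside a single transaction.
    CREATE TABLE copy_test (val char(4));

    BEGIN;
    COPY copy_test FROM '/tmp/copy_test_4byte_10000.data';
    COMMIT;

Each client would run the BEGIN/COPY/COMMIT part in a loop, for example via pgbench with a custom script file (-f).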
Test 2: Identify whether the improvement in the multi-extend case comes from avoiding context switches or from some other factor, such as reusing blocks between backends by putting them in the FSM.
1. Test by just extending multiple blocks and reusing them in the extending backend itself (don't put them in the FSM). Insert test with 1K (1024-byte) records; the data does not fit in shared buffers (shared_buffers = 512MB).
Client   Base (TPS)   Extend 800 blocks, self use (TPS)   Extend 1000 blocks (TPS)
1        117          131                                 118
2        111          203                                 140
3         51          242                                 178
4         51          231                                 190
5         52          259                                 224
6         51          263                                 243
7         43          253                                 254
8         43          240                                 254
16        40          190                                 243
We can see the same improvement when a backend reuses the blocks itself, which shows that sharing the blocks between backends was not the win; avoiding the context switches was the major win.
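For context, here is a rough sketch of the multi-block extension idea: extend by N blocks while holding the extension lock, keep one block for the current insertion, and publish the rest through the FSM so any backend can reuse them without re-taking the lock. It is an illustration under assumed names, not the attached patch; WAL logging and bulk-insert state are deliberately ignored.

    #include "postgres.h"
    #include "storage/bufmgr.h"
    #include "storage/bufpage.h"
    #include "storage/freespace.h"
    #include "storage/lmgr.h"
    #include "utils/rel.h"

    /*
     * Illustrative sketch only: extend the relation by extend_by_blocks pages
     * in one go under the extension lock.  The first page is returned to the
     * caller; the remaining pages are initialized and recorded in the FSM.
     */
    static Buffer
    relation_multi_extend(Relation relation, int extend_by_blocks)
    {
        Buffer      first_buf = InvalidBuffer;

        LockRelationForExtension(relation, ExclusiveLock);

        for (int i = 0; i < extend_by_blocks; i++)
        {
            /* P_NEW appends one new block at the end of the relation */
            Buffer      buf = ReadBuffer(relation, P_NEW);
            BlockNumber blockNum;
            Size        freespace;

            if (i == 0)
            {
                /* the caller will initialize and fill this page */
                first_buf = buf;
                continue;
            }

            /* initialize the extra page and note how much space it offers */
            LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE);
            PageInit(BufferGetPage(buf), BufferGetPageSize(buf), 0);
            MarkBufferDirty(buf);
            freespace = PageGetHeapFreeSpace(BufferGetPage(buf));
            blockNum = BufferGetBlockNumber(buf);
            UnlockReleaseBuffer(buf);

            /* advertise the page in the FSM so any backend can pick it up */
            RecordPageWithFreeSpace(relation, blockNum, freespace);
        }

        UnlockRelationForExtension(relation, ExclusiveLock);

        return first_buf;
    }

The "self use" variant measured above would simply keep the extra block numbers in a backend-local list instead of calling RecordPageWithFreeSpace(), which is exactly the distinction this test is probing.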
2. Tested the number of ProcSleep calls during the run. The workload is the same simple script that COPYs 10,000 records of 4 bytes each in one transaction.
          BASE CODE                      PATCH MULTI EXTEND (extend by 10 blocks)
Client    TPS     ProcSleep count        TPS     ProcSleep count
2         280     457,506                311      62,641
3         235     1,098,701              358     141,624
4         216     1,155,735              368     188,173
What we can see in the above test is that, in the base code, performance degrades after 2 clients while the ProcSleep count increases enormously. With the patch extending 10 blocks at a time, the ProcSleep count drops to roughly 1/8 of the base and performance keeps scaling.
The ProcSleep measurement for the insert test where the data does not fit in shared buffers and the records are big (1024 bytes) is currently running; once I get the data I will post it.
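This mail does not spell out how the ProcSleep counts above were collected; one simple way to gather such a number (an assumption about the method, not a claim about what was actually done) is to bump a per-backend counter at the top of ProcSleep() in src/backend/storage/lmgr/proc.c and log it when the backend exits, roughly like this:

    #include "postgres.h"
    #include "storage/ipc.h"

    /*
     * Illustrative only: a per-backend counter for ProcSleep() entries.
     * count_proc_sleep() would be called from the top of ProcSleep(), and
     * report_proc_sleep_count() registered with on_proc_exit() so the total
     * is written to the server log when the backend exits.
     */
    static uint64 proc_sleep_count = 0;

    void
    count_proc_sleep(void)
    {
        proc_sleep_count++;
    }

    void
    report_proc_sleep_count(int code, Datum arg)
    {
        elog(LOG, "ProcSleep was entered " UINT64_FORMAT " times in this backend",
             proc_sleep_count);
    }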
Posting the rebased version and moving the patch to the next CF.
Open points:
1. After getting the lock, recheck the FSM in case some other backend has already added extra blocks, and reuse them (a rough sketch of this recheck follows below).
2. Is it a good idea to have a user-level parameter for extend_by_block, or can we try some approach that internally identifies how many blocks are needed and adds only that many? The latter would make it more flexible.
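For open point 1, the recheck after acquiring the lock could look roughly like this; GetPageWithFreeSpace() and the extension-lock calls are existing APIs, while the wrapper function and its use here are assumptions for illustration only.

    #include "postgres.h"
    #include "storage/freespace.h"
    #include "storage/lmgr.h"
    #include "utils/rel.h"

    /*
     * Sketch of open point 1: after acquiring the extension lock, consult the
     * FSM again before extending, in case another backend already added extra
     * blocks while we were waiting for the lock.  Illustration only.
     */
    static BlockNumber
    get_block_after_lock(Relation relation, Size len)
    {
        BlockNumber targetBlock;

        LockRelationForExtension(relation, ExclusiveLock);

        /* another backend may have extended and filled the FSM meanwhile */
        targetBlock = GetPageWithFreeSpace(relation, len);

        if (targetBlock == InvalidBlockNumber)
        {
            /*
             * Nobody added a usable block while we waited, so extend here,
             * e.g. by extend_by_block pages as in the patch.
             */
        }

        UnlockRelationForExtension(relation, ExclusiveLock);
        return targetBlock;
    }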
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
Attachment: multi_extend_v4.patch (application/x-patch, 4.0 KB)