From: | Dilip Kumar <dilipbalaut(at)gmail(dot)com> |
---|---|
To: | Andres Freund <andres(at)anarazel(dot)de> |
Cc: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Andres Freund <andres(at)2ndquadrant(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Relation extension scalability |
Date: | 2015-12-31 11:22:50 |
Message-ID: | CAFiTN-sn1HLEOGWm5-U9xmzNJDdXpAMHYeWh3i-u-pP5rcFROQ@mail.gmail.com |
Lists: | pgsql-hackers |
On Fri, Dec 18, 2015 at 10:51 AM, Dilip Kumar <dilipbalaut(at)gmail(dot)com> wrote:
> On Sun, Jul 19, 2015 at 9:37 PM, Andres wrote:
>
> > The situation the read() protect us against is that two backends try to
> > extend to the same block, but after one of them succeeded the buffer is
> > written out and reused for an independent page. So there is no in-memory
> > state telling the slower backend that that page has already been used.
>
> I was looking into this patch and have done some performance testing.
>
> Currently I have done the testing on my local machine; later I will run it on a
> big machine once I get access to one.
>
> Just wanted to share the current results I got on my local machine.
> Machine configuration: Intel(R) Core(TM) i7-4870HQ CPU @ 2.50GHz, 8 cores and
> 16GB of RAM.
>
> Test Script:
> ./psql -d postgres -c "COPY (select g.i::text FROM generate_series(1,
> 10000) g(i)) TO '/tmp/copybinarywide' WITH BINARY";
>
> ./psql -d postgres -c "truncate table data"
> ./psql -d postgres -c "checkpoint"
> ./pgbench -f copy_script -T 120 -c$ -j$ postgres
>
This time I have done some testing on a big machine with 64 physical cores @
2.13GHz and 50GB of RAM.

Below is a performance comparison of the base code, the
RelationExtensionLock-free patch given by Andres, and the multi-extend patch
(which extends multiple blocks at a time, based on a configuration parameter).
Problem Analysis:
------------------------
1. With the base code, when I observed the problem using perf and gdb, I found
that the RelationExtensionLock is the main bottleneck.
2. After applying the RelationExtensionLock-free patch, I observed that the
contention moved to FileWrite (all backends are trying to extend the file).
Performance Summary and Analysis:
------------------------------------------------
1. In my performance results, multi-extend showed the best performance and
scalability.
2. I think that by extending multiple blocks at a time we solve both problems
(the extension lock and parallel file writes).
3. After extending a block we immediately add it to the FSM, so in most cases
other backends can find it directly without taking the extension lock (a sketch
of this idea follows below).
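To make point 3 concrete, here is a minimal sketch of the multi-extend idea as
described in this mail; it is not the attached v1 patch. The function name
MultiExtendRelation is illustrative, the extend_num_pages variable is the
proposed GUC, the caller is assumed to already hold the relation extension
lock, and PageInit is deliberately deferred (see the open problems below).

#include "postgres.h"
#include "storage/bufmgr.h"
#include "storage/bufpage.h"
#include "storage/freespace.h"
#include "utils/rel.h"

int extend_num_pages = 0;       /* assumed GUC, described later in this mail */

/*
 * Sketch: while the caller holds the relation extension lock, add
 * extend_num_pages extra blocks and record each one in the FSM right away,
 * so that other backends can find free space without taking the extension
 * lock at all.
 */
static void
MultiExtendRelation(Relation relation)
{
    int         extraBlocks = extend_num_pages;

    while (extraBlocks-- > 0)
    {
        Buffer      buffer;
        BlockNumber blockNum;

        /* P_NEW extends the relation file by one zero-filled block. */
        buffer = ReadBuffer(relation, P_NEW);
        blockNum = BufferGetBlockNumber(buffer);
        ReleaseBuffer(buffer);

        /*
         * Publish the still-uninitialized page in the FSM immediately;
         * PageInit is deferred until a backend actually uses the page.
         */
        RecordPageWithFreeSpace(relation, blockNum,
                                BLCKSZ - SizeOfPageHeaderData);
    }
}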
Currently the patch is at an initial stage; I have only tested performance and
made it pass the regression test suite.
Open problems:
-----------------------------
1. After extending a page we add it directly to the FSM, so if vacuum finds
this page as new it will give a WARNING.
2. In RelationGetBufferForTuple, when PageIsNew we do PageInit; the same needs
to be considered for the index cases (see the sketch below).
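For open problem 2, here is a minimal sketch (assumed, not taken from the
patch) of the lazy initialization that RelationGetBufferForTuple can do when it
picks up a page that was put into the FSM without PageInit; an equivalent check
would be needed on the index side. The helper name InitPageIfNew is
illustrative.

#include "postgres.h"
#include "storage/bufmgr.h"
#include "storage/bufpage.h"

/*
 * Sketch: after locking the target buffer, initialize the page lazily if it
 * is still "new", i.e. it was added to the FSM at extension time without
 * PageInit.
 */
static void
InitPageIfNew(Buffer buffer)
{
    Page        page = BufferGetPage(buffer);

    if (PageIsNew(page))
    {
        /* Heap pages carry no special space, hence specialSize = 0. */
        PageInit(page, BufferGetPageSize(buffer), 0);
        MarkBufferDirty(buffer);
    }
}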
Test Script:
-------------------------
./psql -d postgres -c "COPY (select g.i::text FROM generate_series(1,
10000) g(i)) TO '/tmp/copybinarywide' WITH BINARY";
./psql -d postgres -c "truncate table data"
./psql -d postgres -c "checkpoint"
./pgbench -f copy_script -T 120 -c$ -j$ postgres
Performance Data:
--------------------------
There are three code bases compared for performance:
1. Base code
2. Lock Free Patch: the patch given in the thread below
   http://www.postgresql.org/message-id/20150719140746.GH25610@awork2.anarazel.de
3. Multi-extend patch, attached to this mail.
extend_num_pages: This is a new config parameter that controls how many extra
pages are added on each normal extension. It might give the user more control
if we made it a relation property instead (a sketch of the GUC wiring follows
below).
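For illustration only, here is a sketch of how extend_num_pages could be wired
up as an integer GUC in guc.c; the group, default, and limits shown here are
my guesses, not necessarily what the attached patch does.

/* Variable backing the GUC (same name as used in this mail). */
int         extend_num_pages = 0;

/* Candidate entry for guc.c's ConfigureNamesInt[] (group/limits assumed). */
{
    {"extend_num_pages", PGC_USERSET, RESOURCES_MEM,
        gettext_noop("Number of extra pages to add whenever a relation is extended."),
        NULL
    },
    &extend_num_pages,
    0, 0, 1024,
    NULL, NULL, NULL
},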
I will work on the patch during this CF, so I am adding it to the CF.
Shared Buffer 48 GB

Clients | Base (TPS) | Lock Free Patch | Multi-extend (extend_num_pages=5)
---|---|---|---
1 | 142 | 138 | 148
2 | 251 | 253 | 280
4 | 237 | 416 | 464
8 | 168 | 491 | 575
16 | 141 | 448 | 404
32 | 122 | 337 | 332
Shared Buffer 64 MB

Clients | Base (TPS) | Multi-extend (extend_num_pages=5)
---|---|---
1 | 140 | 148
2 | 252 | 266
4 | 229 | 437
8 | 153 | 475
16 | 132 | 364
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
Attachment | Content-Type | Size |
---|---|---|
multi_extend_v1.patch | text/x-patch | 5.8 KB |