From: | Andres Freund <andres(at)anarazel(dot)de> |
---|---|
To: | Dilip Kumar <dilipbalaut(at)gmail(dot)com> |
Cc: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Andres Freund <andres(at)2ndquadrant(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Relation extension scalability |
Date: | 2016-02-02 15:49:13 |
Message-ID: | 20160202154913.GV8743@awork2.anarazel.de |
Lists: | pgsql-hackers |
On 2016-01-28 16:53:08 +0530, Dilip Kumar wrote:
> test_script:
> ------------
> ./psql -d postgres -c "truncate table data"
> ./psql -d postgres -c "checkpoint"
> ./pgbench -f copy_script -T 120 -c$ -j$ postgres
>
> Shared Buffer 48GB
> Table: Unlogged Table
> ./pgbench -c$ -j$ -f copy_script -M Prepared postgres
>
> Clients   Base   Patch
> 1          178     180
> 2          337     338
> 4          265     601
> 8          167     805
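
For reference, the copy_script referenced above isn't shown in the quote; a
hypothetical equivalent, with the file path and format purely assumed, would
be a single server-side COPY per pgbench transaction:

copy_script:
------------
copy data from '/tmp/copy_data.csv' with (format csv);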
Could you also measure how this behaves for an INSERT instead of a COPY
workload? Both throughput and latency. It's quite possible that this
causes latency spikes, because suddenly backends will have to wait for
one another to extend the relation by 50 pages. You'll probably have to use -P 1 or
full statement logging to judge that. I think just having a number of
connections inserting relatively wide rows into one table should do the
trick.
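
Something along these lines would do, with the table definition, row width
and client counts just picked for illustration:

./psql -d postgres -c "create unlogged table data(payload text)"

insert_script:
--------------
insert into data values (repeat('x', 1000));

./pgbench -f insert_script -M prepared -P 1 -c$ -j$ -T 120 postgres

The -P 1 progress output (or log_min_duration_statement = 0 on the server)
should make any extension-related latency spikes visible.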
I'm doubtful that anything that does the victim buffer search while
holding the extension lock will actually scale in a wide range of
scenarios. The copy scenario here probably isn't too bad because the
copy ring buffers are in use, and because there are no reads increasing the
usage count of recently used buffers; thus victim buffers are easily found.
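
To make the victim search expensive, a rough sketch would be to run the
standard select-only workload concurrently (after a separate pgbench -i,
scale factor picked arbitrarily), so random point reads keep driving up
usage counts in shared buffers while the insert clients extend the table:

./pgbench -i -s 100 postgres
./pgbench -S -M prepared -c$ -j$ -T 120 postgres &
./pgbench -f insert_script -M prepared -P 1 -c$ -j$ -T 120 postgres

With lots of buffers sitting at high usage counts, the clock sweep for a
victim buffer while holding the extension lock should take noticeably longer.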
Thanks,
Andres