From: Merlin Moncure <mmoncure(at)gmail(dot)com>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: hint bit cache v5
Date: 2011-05-11 15:38:41
Message-ID: BANLkTi=7mqJv10kf9j_qbfxLiYUWdpt8eA@mail.gmail.com
Lists: pgsql-hackers
On Tue, May 10, 2011 at 11:59 AM, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
> On Mon, May 9, 2011 at 5:12 PM, Merlin Moncure <mmoncure(at)gmail(dot)com> wrote:
>
>> I'd like to know if this is a strategy that merits further work...If
>> anybody has time/interest that is. It's getting close to the point
>> where I can just post it to the commit fest for review. In
>> particular, I'm concerned if Tom's earlier objections can be
>> satisfied. If not, it's back to the drawing board...
>
> I'm interested in what you're doing here.
>
> From here, there's quite a lot of tuning possibilities. It would be
> very useful to be able to define some metrics we are interested in
> reducing and working out how to measure them.
Following are results that are fairly typical of the benefits you
might see when the optimization kicks in. The attached benchmark just
creates a bunch of records in a random table and scans it. This is
more or less the scenario that causes people to gripe about hint bit
i/o, especially on systems that are already under moderate to heavy
i/o stress. I'm going to call it 20%, although it could be less if
you have an i/o system that spanks the test (try cranking -c and the
number of records created in bench.sql in that case). Anecdotal
reports of extreme duress caused by hint bit i/o suggest problematic
or mixed-use (OLTP + OLAP) workloads might see even more benefit. One
thing I still need to test is how much benefit you'll see with wider
records.
I think I'm going to revert the change to cache invalid bits. I just
don't see hint bits as a major contributor to dead tuples following
epic rollbacks (really, the solution for that case is simply to try
not to get into that scenario if you can). This will put the code
back to the cheaper and simpler bit-per-transaction addressing. What
I do plan to do, though, is check and set xmax commit bits in the
cache...that way deleted tuples will see cache benefits.
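For anyone following along, the bit-per-transaction addressing above can be
sketched roughly like this. This is NOT the actual patch code; the names
(hbcache_*), the window size, and the window layout are all invented for
illustration. The idea is one bit per xid over a contiguous window of
transaction IDs, so a cache hit lets a scan skip the clog lookup and, more
importantly, avoid dirtying the page just to set a hint bit:

```c
/* Hypothetical sketch of a bit-per-transaction commit-status cache.
 * All names and sizes here are illustrative, not from the patch. */
#include <stdint.h>
#include <string.h>
#include <stdbool.h>

#define HBCACHE_XIDS (64 * 1024)        /* xids covered by one window */

typedef struct
{
    uint32_t base_xid;                  /* first xid in the window */
    uint8_t  bits[HBCACHE_XIDS / 8];    /* 8 KB: one bit per xid */
} HBCache;

static void
hbcache_reset(HBCache *c, uint32_t base)
{
    c->base_xid = base;
    memset(c->bits, 0, sizeof(c->bits));
}

static bool
hbcache_in_window(const HBCache *c, uint32_t xid)
{
    return xid >= c->base_xid && xid - c->base_xid < HBCACHE_XIDS;
}

/* Record that xid committed, e.g. at the point where we would have set
 * HEAP_XMIN_COMMITTED or HEAP_XMAX_COMMITTED on the tuple itself. */
static void
hbcache_set_committed(HBCache *c, uint32_t xid)
{
    if (hbcache_in_window(c, xid))
    {
        uint32_t off = xid - c->base_xid;

        c->bits[off >> 3] |= (uint8_t) (1u << (off & 7));
    }
}

/* True if the cache already knows xid committed; a miss just means
 * "fall through to the normal clog check". */
static bool
hbcache_is_committed(const HBCache *c, uint32_t xid)
{
    uint32_t off;

    if (!hbcache_in_window(c, xid))
        return false;
    off = xid - c->base_xid;
    return (c->bits[off >> 3] >> (off & 7)) & 1;
}
```

Checking xmax the same way as xmin is what makes deleted tuples benefit:
the delete's xid gets its bit set once, and subsequent scans over the dead
tuples hit the cache instead of the clog.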
[hbcache]
merlin(at)mmoncure-ubuntu:~$ time pgbench -c 4 -n -T 200 -f bench.sql
transaction type: Custom query
scaling factor: 1
query mode: simple
number of clients: 4
number of threads: 1
duration: 200 s
number of transactions actually processed: 8
tps = 0.037167 (including connections establishing)
tps = 0.037171 (excluding connections establishing)
real 3m35.549s
user 0m0.008s
sys 0m0.004s
[HEAD]
merlin(at)mmoncure-ubuntu:~$ time pgbench -c 4 -n -T 200 -f bench.sql
transaction type: Custom query
scaling factor: 1
query mode: simple
number of clients: 4
number of threads: 1
duration: 200 s
number of transactions actually processed: 8
tps = 0.030313 (including connections establishing)
tps = 0.030317 (excluding connections establishing)
real 4m24.216s
user 0m0.000s
sys 0m0.012s
| Attachment | Content-Type | Size |
|---|---|---|
| bench.sql | application/octet-stream | 100 bytes |
| bench_setup.sql | application/octet-stream | 725 bytes |