From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Turning off HOT/Cleanup sometimes
Date: 2015-04-20 08:04:18
Message-ID: CAMkU=1z4A46pEbXUjhqc+NYZ1nRyg59iDz28DnmiRDpav+hcAQ@mail.gmail.com
Lists: pgsql-hackers
On Mon, Sep 29, 2014 at 2:13 AM, Andres Freund <andres(at)anarazel(dot)de> wrote:
>
> On 2014-09-28 19:51:36 +0100, Simon Riggs wrote:
> > On 27 September 2014 09:29, Andres Freund <andres(at)anarazel(dot)de> wrote:
> > > On 2014-09-27 10:23:33 +0300, Heikki Linnakangas wrote:
> > >> This patch has gotten a fair amount of review, and has been rewritten
> > >> once during the commitfest. I think it's pretty close to being
> > >> committable, the only remaining question seems to be what to do with
> > >> system catalogs. I'm marking this as "Returned with feedback", I take
> > >> it that Simon can proceed from here, outside the commitfest.
> > >
> > > FWIW, I don't think it is, even with that. As is it seems very likely
> > > that it's going to regress a fair share of workloads. At the very least
> > > it needs a fair amount of benchmarking beforehand.
> >
> > There is some doubt there. We've not seen a workload that does
> > actually exhibit a negative behaviour.
>
> Neither is there much data about the magnitude of positive effect the
> patch has...
>
> > I'm not saying one doesn't exist, but it does matter how common/likely
> > it is. If anyone can present a performance test case that demonstrates
> > a regression, I think it will make it easier to discuss how wide that
> > case is and what we should do about it. Discussing whether to do
> > various kinds of limited pruning are moot until that is clear.
>
> I doubt it'll be hard to construct a case where it'll show. My first try
> of using a pgbench scale 100, -M prepared, -cj8 with a custom file with
> 1 write and 5 read transaction yielded the following on my laptop:
>
> Baseline:
> relname | pgbench_tellers
> pg_total_relation_size | 458752
> relname | pgbench_accounts
> pg_total_relation_size | 1590337536
> relname | pgbench_branches
> pg_total_relation_size | 286720
> relname | pgbench_history
> pg_total_relation_size | 49979392
> Patched:
> relname | pgbench_tellers
> pg_total_relation_size | 516096
> relname | pgbench_accounts
> pg_total_relation_size | 1590337536
> relname | pgbench_branches
> pg_total_relation_size | 360448
> relname | pgbench_history
> pg_total_relation_size | 49528832
>
> So, there's a noticeable increase in size. Mostly on the smaller tables,
> so probably HOT cleanup was sometimes skipped during UPDATEs due to
> locks.
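
(The per-table sizes quoted above can be collected with a query along these
lines; this is only a sketch, since the exact query Andres ran is not shown
in the thread, and the database name "bench" is an assumption.)

```shell
# Sketch only: the exact query used for the numbers above is not shown
# in the thread. Writes the size query to a file; actually running it
# needs a live pgbench database.
cat > relsizes.sql <<'EOF'
SELECT relname, pg_total_relation_size(oid)
  FROM pg_class
 WHERE relname LIKE 'pgbench_%' AND relkind = 'r'
 ORDER BY relname;
EOF
# psql -x bench -f relsizes.sql   # -x gives the expanded "relname | ..." layout
```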
>
> Baseline was:
> tps = 9655.486532 (excluding connections establishing)
> Patched was:
> tps = 9466.158701 (including connections establishing)
Was this reproducible? I ran your custom sql file with 4 clients (as many
as I have CPUs) on a machine with a BBU, using the latest patch. I had
wal_level = hot_standby, but the archive_command just returned true without
archiving anything.
The sizes of the pgbench_tellers and pgbench_branches relations were
surprisingly variable in both the patched and unpatched runs, but there was
no reliable difference between the two builds, only variation within each.
On the TPS front, there was a hint that the patched build was slightly
slower, but the within-sample variation was also high, and the p-value for
the difference was only 0.214 with n = 66.
Test case attached.
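
(For readers without the attachment, a custom file of the shape Andres
described, 1 write and 5 reads per transaction, could look roughly like the
sketch below. This is a hypothetical reconstruction, not the attached test
case; the \set ... random() syntax is from newer pgbench releases, where
older ones spelled it \setrandom, and the real script may well have drawn a
fresh aid for each SELECT.)

```shell
# Hypothetical sketch of a 1-write/5-read pgbench transaction file;
# the real test case was attached to the mail and is not reproduced here.
cat > custom.sql <<'EOF'
\set aid random(1, 100000 * :scale)
\set delta random(-5000, 5000)
UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
EOF
# pgbench -i -s 100 bench                                 # scale 100, as above
# pgbench -M prepared -c 8 -j 8 -T 60 -f custom.sql bench
```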
> That's not an unrealistic testcase.
>
> I'm pretty sure this could be made quite a bit more pronounced by not
> using a uniform distribution in the pgbench runs. And selecting a test
> that's more vulnerable to the change (e.g. using a wider distribution
> for the read only statements than the modifying ones) would make the
> CPU overhead of the additional heap_hot_search_buffer() calls heavier.
Sorry, I don't understand this description. Why would queries selecting
data that is not changing have any extra overhead?
Is the idea that the hot part of the table for updates would move around
over time, while the hot part for selects would be spread evenly? I'm not
sure how to put that to the test.
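
(One way to try it, assuming a pgbench new enough to have non-uniform
random functions such as random_exponential(), would be to skew the UPDATEs
onto a small hot region while keeping the SELECTs uniform, so reads keep
landing on pages the writes rarely prune. A sketch, with all names and
parameters hypothetical:)

```shell
# Hypothetical sketch: concentrate UPDATEs on a hot subset of accounts
# while SELECTs stay uniform over the whole table.
cat > skewed.sql <<'EOF'
\set hot random_exponential(1, 100000 * :scale, 10.0)
\set uni random(1, 100000 * :scale)
\set delta random(-5000, 5000)
UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :hot;
SELECT abalance FROM pgbench_accounts WHERE aid = :uni;
EOF
# pgbench -M prepared -c 8 -j 8 -T 60 -f skewed.sql bench
```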
Cheers,
Jeff