From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>
Cc: simon(at)2ndquadrant(dot)com, stark(at)mit(dot)edu, pgsql-hackers(at)postgresql(dot)org
Subject: Re: measuring lwlock-related latency spikes
Date: 2012-04-06 04:30:05
Message-ID: CA+TgmoZEPVv-Lrn-mmzFteTv8bx_g5jLfiEJgHYPQN50pnswVA@mail.gmail.com
Lists: pgsql-hackers
On Tue, Apr 3, 2012 at 8:28 AM, Kevin Grittner
<Kevin(dot)Grittner(at)wicourts(dot)gov> wrote:
> Might as well jump in with both feet:
>
> autovacuum_naptime = 1s
> autovacuum_vacuum_threshold = 1
> autovacuum_vacuum_scale_factor = 0.0
>
> If that smooths the latency peaks and doesn't hurt performance too
> much, it's decent evidence that the more refined technique could be a
> win.
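(Just to spell out how aggressive that is: the autovacuum trigger
condition is

    vacuum threshold = autovacuum_vacuum_threshold
                       + autovacuum_vacuum_scale_factor * reltuples
                     = 1 + 0.0 * reltuples
                     = 1

so any table with even a single dead tuple qualifies, and the launcher
rechecks every autovacuum_naptime = 1s.)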
It seems this isn't good for either throughput or latency. Here are
latency percentiles for a recent run against master with my usual
settings:
90 1668
91 1747
92 1845
93 1953
94 2064
95 2176
96 2300
97 2461
98 2739
99 3542
100 12955473
And here's how it came out with these settings:
90 1818
91 1904
92 1998
93 2096
94 2200
95 2316
96 2459
97 2660
98 3032
99 3868
100 10842354
tps came out to 13658.330709 (including connections establishing),
vs. 14546.644712 on the other run.
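For anyone who wants to reproduce the percentile breakdown, here's a
rough sketch of the sort of script that can generate it from a pgbench
-l per-transaction log (just an illustration, not necessarily the exact
script used for the numbers above; it assumes the standard log format,
where the third whitespace-separated field is the transaction latency
in microseconds):

# percentiles.py: summarize 90th-100th percentile latencies from a
# pgbench -l per-transaction log (latency assumed to be field 3, in us).
import sys

latencies = []
with open(sys.argv[1]) as f:
    for line in f:
        fields = line.split()
        if len(fields) >= 3:
            latencies.append(int(fields[2]))

latencies.sort()
n = len(latencies)
for pct in range(90, 101):
    # Index of the value at (roughly) this percentile; 100 maps to the max.
    idx = min(n - 1, n * pct // 100)
    print("%d %d" % (pct, latencies[idx]))

Run it as "python percentiles.py pgbench_log.<pid>"; with multiple
threads, concatenate the per-thread log files first.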
I have a (possibly incorrect) feeling that even with these
ridiculously aggressive settings, nearly all of the cleanup work is
getting done by HOT prunes rather than by vacuum, so we're still not
testing what we really want to be testing, but we're doing a lot of
extra work along the way.
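One crude way to check that theory would be to snapshot
pg_stat_user_tables for the pgbench tables before and after a run and
compare how many updates were HOT against how often autovacuum actually
ran. Something along these lines (just a sketch; the database name and
connection details are assumptions):

# Sketch: dump update/HOT-update/dead-tuple/vacuum counters for the
# pgbench tables. Assumes a database named "pgbench" and the standard
# pgbench_* table names.
import psycopg2

conn = psycopg2.connect(dbname="pgbench")
cur = conn.cursor()
cur.execute("""
    SELECT relname, n_tup_upd, n_tup_hot_upd, n_dead_tup,
           vacuum_count, autovacuum_count
    FROM pg_stat_user_tables
    WHERE relname LIKE 'pgbench%'
    ORDER BY relname
""")
for row in cur.fetchall():
    print(row)
cur.close()
conn.close()

If n_tup_hot_upd accounts for nearly all of n_tup_upd and
autovacuum_count barely moves despite the 1s naptime, that would
support the idea that pruning, not vacuum, is doing the cleanup.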
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company