| From: | Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com> |
|---|---|
| To: | Fabien COELHO <coelho(at)cri(dot)ensmp(dot)fr> |
| Cc: | PostgreSQL Developers <pgsql-hackers(at)postgresql(dot)org> |
| Subject: | Re: checkpointer continuous flushing |
| Date: | 2016-03-22 09:48:20 |
| Message-ID: | b1a3c958-a2b6-bda5-e80c-0aed3c129654@2ndquadrant.com |
| Lists: | pgsql-hackers |
Hi,
On 03/22/2016 10:44 AM, Fabien COELHO wrote:
>
>
>>>> 1) regular-latency.png
>>>
>>> I'm wondering whether it would be clearer if the percentiles
>>> were relative to the largest sample, not to itself, so that the
>>> figures from the largest one would still be between 0 and 1, but
>>> the other (unpatched) one would go between 0 and 0.85, that is,
>>> would be cut short proportionally to the actual performance.
>>
>> I'm not sure what you mean by 'relative to largest sample'?
>
> You took 5% of the tx on two 12 hours runs, totaling say 85M tx on
> one and 100M tx on the other, so you get 4.25M tx from the first and
> 5M from the second.
OK
> I'm saying that the percentile should be computed on the largest one
> (5M), so that you get a curve like the following, with both curves
> having the same transaction density on the y axis, so the second one
> does not go up to the top, reflecting that in this case fewer
> transactions were processed.
Huh, that seems weird. That's not how percentiles or CDFs work, and I
don't quite understand what that would tell us.
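For illustration, the two normalizations under discussion can be sketched as below. The latency data is made up (it is not from the actual benchmark runs), and the sample sizes are scaled down from 100M/85M tx to 100/85 to keep the example small:

```python
# Hypothetical latency samples (ms) from two runs over the same wall
# time: the "patched" run completed 100 tx, the "unpatched" one 85.
patched = sorted(0.10 * i for i in range(1, 101))
unpatched = sorted(0.12 * i for i in range(1, 86))

# Standard empirical CDF: each run is normalized by its own sample
# count, so both curves reach 1.0 regardless of throughput.
cdf_unpatched = [(i + 1) / len(unpatched) for i in range(len(unpatched))]

# Fabien's proposed variant: normalize both runs by the larger
# count, so the smaller run's curve tops out at 85/100 = 0.85,
# encoding the throughput difference on the y axis.
n_max = max(len(patched), len(unpatched))
alt_unpatched = [(i + 1) / n_max for i in range(len(unpatched))]

print(cdf_unpatched[-1])  # 1.0
print(alt_unpatched[-1])  # 0.85
```

The second curve is no longer a CDF in the strict sense, since it does not integrate to 1, which is presumably the source of the disagreement above.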
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services