From: Merlin Moncure <mmoncure(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Andres Freund <andres(at)2ndquadrant(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Scaling shared buffer eviction
Date: 2014-09-25 14:02:25
Message-ID: CAHyXU0xRwOK9kvPsuLwS93vqP3hvw3a3y9bizuwYiBhXEUOLiQ@mail.gmail.com
Lists: pgsql-hackers
On Thu, Sep 25, 2014 at 8:51 AM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> 1. To see the effect of reduce-replacement-locking.patch, compare the
> first TPS number in each line to the third, or the second to the
> fourth. At scale factor 1000, the patch wins in all of the cases with
> 32 or more clients and exactly half of the cases with 1, 8, or 16
> clients. The variations at low client counts are quite small, and the
> patch isn't expected to do much at low concurrency levels, so that's
> probably just random variation. At scale factor 3000, the situation
> is more complicated. With only 16 bufmappinglocks, the patch gets its
> biggest win at 48 clients, and by 96 clients it's actually losing to
> unpatched master. But with 128 bufmappinglocks, it wins - often
> massively - on everything but the single-client test, which is a small
> loss, hopefully within experimental variation.
>
> Comments?
Why stop at 128 mapping locks? Theoretical downsides to having more
mapping locks have been mentioned a few times, but has this ever been
measured? I'm starting to wonder if the # of mapping locks should be
dependent on some other value, perhaps the # of shared buffers...
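
Just to illustrate the idea -- purely a sketch, nothing I've measured;
the buffers-per-partition ratio and the 16/1024 bounds are numbers I
made up, and it assumes turning the compile-time NUM_BUFFER_PARTITIONS
constant into a value computed at startup from the existing NBuffers
global:

    /*
     * Hypothetical sketch only (not from the patch under discussion):
     * derive the number of buffer mapping partitions from the size of
     * shared_buffers instead of hard-coding it.  Keeps the count a
     * power of two and within a fixed range; the bounds and the
     * per-partition ratio below are arbitrary and would need
     * benchmarking.
     */
    static int
    compute_num_buffer_partitions(int nbuffers)
    {
        int     nparts = 16;        /* historical value */

        /* roughly one partition per 8192 buffers (64MB at 8KB pages) */
        while (nparts < 1024 && nparts * 8192 < nbuffers)
            nparts *= 2;

        return nparts;
    }

Keeping the count a power of two keeps the hashcode-to-partition
mapping cheap; whether going past 128 actually helps is exactly the
thing that would need to be measured.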
merlin