From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Konstantin Knizhnik <k(dot)knizhnik(at)postgrespro(dot)ru>
Cc: "Ideriha, Takeshi" <ideriha(dot)takeshi(at)jp(dot)fujitsu(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Global shared meta cache
Date: 2018-07-05 14:00:13
Message-ID: CA+TgmoZdbw8jVdd4XuwFr8Tn6HSnTA=gk0tvirhe9MGnCZQoXw@mail.gmail.com
Lists: pgsql-hackers

On Mon, Jul 2, 2018 at 5:59 AM, Konstantin Knizhnik
<k(dot)knizhnik(at)postgrespro(dot)ru> wrote:
> But I am not sure that just using RW lock will be enough to replace local
> cache with global.

I'm pretty sure it won't. In fact, no matter what kind of locking you
use, it's bound to cost something. There is no such thing as a free
lunch. It does not seem realistic to me to suppose that we're going
to just get rid of all of our backend-private caches and replace them
with a shared cache and somehow there will be no performance
regression. Maybe someone will come up with something that is
surprisingly awesome, but I doubt it.

I think we need to take a little bit broader view of this problem.
For instance, maybe we could have backend-local caches that are kept
relatively small, and then a larger shared cache that can hold more
entries. There are code comments someplace that say that
CLOBBER_CACHE_ALWAYS is about 100x slower than a regular build, and
CLOBBER_CACHE_RECURSIVELY is about 10000x slower. Obviously, loading an
entry into our backend-private cache must be a LOT slower than
consulting one. If the shared cache is only, say, 3x slower than the
backend-private cache, then we might be able to get away with having
only the really-frequently-accessed stuff in the actual private cache
and the somewhat-frequently-accessed stuff in the shared cache. Now
that's probably still going to cost something, but maybe we can make
that "something" very small in the cases people are actually likely to
hit. I would guess that we'd want to try to use something like the
st_changecount protocol to make reads very cheap and writes
comparatively more expensive, since by and large cache invalidations
aren't that frequent.
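
To sketch what I mean (a minimal, self-contained illustration only:
the entry layout and names below are invented, and C11 atomics stand
in for our pg_atomic wrappers):

#include <stdatomic.h>
#include <string.h>

/* Hypothetical shared-cache entry; the payload layout is made up. */
typedef struct SharedCacheEntry
{
    atomic_uint changecount;    /* odd while a writer is mid-update */
    char        payload[64];    /* the cached catalog data */
} SharedCacheEntry;

/*
 * Writer side: assumes callers hold a per-entry lock, so only one
 * writer runs at a time.  Bump the counter to an odd value before
 * the update and back to an even one afterward.
 */
static void
shared_entry_write(SharedCacheEntry *e, const char *data, size_t len)
{
    atomic_fetch_add_explicit(&e->changecount, 1, memory_order_relaxed);
    atomic_thread_fence(memory_order_release);
    memcpy(e->payload, data, len);
    atomic_fetch_add_explicit(&e->changecount, 1, memory_order_release);
}

/*
 * Reader side: no lock at all.  Copy the payload, then retry if the
 * counter changed underneath us or was odd (write in progress).  As
 * in the pgstat code, the racy copy is acceptable because the
 * changecount check discards any torn read.
 */
static void
shared_entry_read(SharedCacheEntry *e, char *out, size_t len)
{
    unsigned    before;
    unsigned    after;

    do
    {
        before = atomic_load_explicit(&e->changecount,
                                      memory_order_acquire);
        memcpy(out, e->payload, len);
        atomic_thread_fence(memory_order_acquire);
        after = atomic_load_explicit(&e->changecount,
                                     memory_order_relaxed);
    } while (before != after || (before & 1));
}

That way readers pay only a couple of atomic loads per lookup, and
essentially all of the synchronization cost lands on the (rare)
invalidation path.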

Another approach would be to consider whether we're caching too much
useless junk in the first place. For instance, maybe there's some
stuff in the relcache that takes up a lot of memory compared to how
much of a performance boost it produces, or maybe there's some stuff
that could be represented more compactly.

One experiment I think would be interesting is to study how much
catcache traffic we're actually generating and see if we can find any
way to reduce it. For instance, somebody could write code to record
the file and line number for every catcache lookup and then run a test
workload (or several test workloads). That might give us some ideas
about stuff we could just decide not to cache, especially if we also
knew how large each cache ended up being.
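
A rough way to prototype that measurement might look like this
(SearchSysCache1() and elog() are the real interfaces, but the
tracing macro and helper below are invented for illustration):

#include "postgres.h"
#include "utils/syscache.h"

/* Log one lookup with its call site; DEBUG5 keeps it out of the way. */
static inline void
record_catcache_lookup(int cacheId, const char *file, int line)
{
    elog(DEBUG5, "catcache lookup: cache %d at %s:%d",
         cacheId, file, line);
}

/* Call sites switched to this would self-report their location. */
#define SearchSysCacheTraced1(cacheId, key1) \
    (record_catcache_lookup((cacheId), __FILE__, __LINE__), \
     SearchSysCache1((cacheId), (key1)))

Then sorting and counting the log lines by file:line would show which
call sites dominate, and per-cache totals would hint at which caches
might be worth shrinking or dropping.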

With respect to partitioning specifically, it seems like we might be
able to come up with some way of planning that doesn't need a full
relcache entry for every partition, particularly if there are no
partition-local objects (indexes, triggers, etc.). But that seems
like a hard refactoring, and even if we did it, what about execution
time? So much code expects to be handed a Relation. Still, I have a
suspicion that there might be some way to do better here with enough
work.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company