From: Andres Freund <andres(at)anarazel(dot)de>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, David Steele <david(at)pgmasters(dot)net>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Protect syscache from bloating with negative cache entries
Date: 2018-03-01 20:01:29
Message-ID: 20180301200129.tuaq727p3xtqohx6@alap3.anarazel.de
Lists: pgsql-hackers
On 2018-03-01 14:49:26 -0500, Robert Haas wrote:
> On Thu, Mar 1, 2018 at 2:29 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> > Right. Which might be very painful latency wise. And with poolers it's
> > pretty easy to get into situations like that, without the app
> > influencing it.
>
> Really? I'm not sure I believe that. You're talking perhaps a few
> milliseconds - maybe less - of additional latency on a connection
> that's been idle for many minutes.
I've seen latency increases in the second-plus range due to empty cat/sys/rel
caches. And the connection doesn't have to be idle; it might just have
been active for a different application doing different things, and thus
accessing different cache entries. With a pooler you can trivially end
up switching connections occasionally between different [parts of]
applications, and you don't want performance to suck after each such switch.
You also don't want to use up all memory, I entirely agree on that.
> Anyway, I don't mind making the exact timeout a GUC (with 0 disabling
> the feature altogether) if that addresses your concern, but in general
> I think that it's reasonable to accept that a connection that's been
> idle for a long time may have a little bit more latency than usual
> when you start using it again.
I don't think that'd quite address my concern. I just don't think that
the granularity (drop all entries older than xxx sec at the next resize)
is right. For one, I don't want to drop entries if the cache size isn't a
problem for the current memory budget. For another, I'm not convinced
that dropping entries from the current "generation" at resize won't end
up throwing away too much.
If we had a GUC 'syscache_memory_target' and only started pruning once the
cache was above it, I'd be much happier.
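
To make that concrete, roughly the policy I have in mind looks like the
sketch below: age-based pruning is attempted only once the cache's memory
footprint exceeds the syscache_memory_target budget, rather than dropping
old entries unconditionally at every resize. All of the names here
(CacheState, prune_entries_older_than, maybe_prune_cache) are hypothetical,
not existing PostgreSQL APIs.

/*
 * Hypothetical sketch only -- illustrates the proposed policy of pruning
 * cold entries only when the cache exceeds a configurable memory budget.
 */
#include <stddef.h>
#include <stdbool.h>

/* GUC, in kilobytes; 0 disables pruning entirely in this sketch */
static int syscache_memory_target = 0;

typedef struct CacheState
{
	size_t		bytes_used;		/* memory currently consumed by entries */
	/* ... hash table, LRU list, etc. ... */
} CacheState;

/* assumed helper: evict entries not accessed since 'cutoff', return count */
extern size_t prune_entries_older_than(CacheState *cache, double cutoff);

/*
 * Called where the cache would otherwise grow (e.g. at resize time).
 * Returns true if any entries were released.
 */
static bool
maybe_prune_cache(CacheState *cache, double cutoff)
{
	size_t		target_bytes = (size_t) syscache_memory_target * 1024;

	/* If no budget is configured, or we're within it, don't drop anything. */
	if (target_bytes == 0 || cache->bytes_used <= target_bytes)
		return false;

	/* Over budget: only now is it reasonable to throw away cold entries. */
	return prune_entries_older_than(cache, cutoff) > 0;
}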
Greetings,
Andres Freund