From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: "Tsunakawa, Takayuki" <tsunakawa(dot)takay(at)jp(dot)fujitsu(dot)com>
Cc: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, "andres(at)anarazel(dot)de" <andres(at)anarazel(dot)de>, "alvherre(at)2ndquadrant(dot)com" <alvherre(at)2ndquadrant(dot)com>, "tomas(dot)vondra(at)2ndquadrant(dot)com" <tomas(dot)vondra(at)2ndquadrant(dot)com>, "bruce(at)momjian(dot)us" <bruce(at)momjian(dot)us>, "tgl(at)sss(dot)pgh(dot)pa(dot)us" <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "Ideriha, Takeshi" <ideriha(dot)takeshi(at)jp(dot)fujitsu(dot)com>, "pgsql-hackers(at)lists(dot)postgresql(dot)org" <pgsql-hackers(at)lists(dot)postgresql(dot)org>, "michael(dot)paquier(at)gmail(dot)com" <michael(dot)paquier(dot)gmail(dot)com>, "david(at)pgmasters(dot)net" <david(at)pgmasters(dot)net>, "craig(at)2ndquadrant(dot)com" <craig(at)2ndquadrant(dot)com>
Subject: Re: Protect syscache from bloating with negative cache entries
Date: 2019-02-26 15:46:31
Message-ID: CA+TgmoaJPmpTsD3CZ4R7A6e8SiXRNJ=gV1fNOp_POPns8Ld1hw@mail.gmail.com
Lists: pgsql-hackers
On Mon, Feb 25, 2019 at 3:50 AM Tsunakawa, Takayuki
<tsunakawa(dot)takay(at)jp(dot)fujitsu(dot)com> wrote:
> How can I make sure that this context won't exceed, say, 10 MB to avoid OOM?
As Tom has said before and will probably say again, I don't think you
actually want that. We know that PostgreSQL gets roughly 100x slower
with the system caches disabled - try running with
CLOBBER_CACHE_ALWAYS. If you are accessing the same system cache
entries repeatedly in a loop - which is not at all an unlikely
scenario; just run the same query or sequence of queries in a loop -
and the total size of those entries exceeds 10MB, even (or perhaps
especially) by just a tiny bit, you are going to see a massive
performance hit.
Maybe it won't be 100x because some more-commonly-used entries will
always stay cached, but it's going to be really big, I think.
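
To make the cost concrete, here is a minimal sketch of the standard
backend access pattern. SearchSysCache1() and ReleaseSysCache() are
the real interfaces from utils/syscache.h; the wrapper function
itself is hypothetical, purely for illustration:

/*
 * Hypothetical helper, just to show the hot path.  On a cache hit
 * this is a cheap in-memory hash lookup; on a miss - or, in effect,
 * on nearly every lookup under CLOBBER_CACHE_ALWAYS, which flushes
 * the caches at every opportunity - it falls back to a catalog index
 * scan, which is where the ~100x slowdown comes from when the same
 * entries keep getting evicted and reloaded.
 */
#include "postgres.h"
#include "access/htup_details.h"
#include "catalog/pg_type.h"
#include "utils/syscache.h"

static int16
get_type_len(Oid typid)
{
    HeapTuple   tup;
    int16       typlen;

    tup = SearchSysCache1(TYPEOID, ObjectIdGetDatum(typid));
    if (!HeapTupleIsValid(tup))
        elog(ERROR, "cache lookup failed for type %u", typid);
    typlen = ((Form_pg_type) GETSTRUCT(tup))->typlen;
    ReleaseSysCache(tup);
    return typlen;
}

Run that in a loop over the same few OIDs and the difference between
the hit path and the miss path is exactly the gap described above.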
Now you could say - well, it's still better than running out of memory.
However, memory usage is quite unpredictable. It depends on how many
backends are active and how many copies of work_mem and/or
maintenance_work_mem are in use, among other things. I don't think we
can say that just imposing a limit on the size of the system caches is
going to be enough to reliably prevent an out of memory condition
unless the other uses of memory on the machine happen to be extremely
stable.
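
To pick illustrative numbers (these are assumptions, not measurements
from any real system): with max_connections = 100, work_mem = 32MB,
and query plans containing three sort or hash nodes apiece - each
node can claim its own copy of work_mem - peak executor memory alone
is on the order of

    100 backends * 3 nodes * 32MB = 9600MB, i.e. ~9.4GB

and that figure swings with the workload regardless of what the
caches are doing, so a 10MB cap on the caches buys you almost nothing
against that kind of variance.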
So I think what's going to happen if you try to impose a hard limit on
the size of the system cache is that you will cause some workloads to
slow down by 3x or more without actually preventing out of memory
conditions. What you need to do is accept that system caches need to
grow as big as they need to grow, and if that causes you to run out of
memory, either buy more memory or reduce the number of concurrent
sessions you allow. It would be fine to limit the cache memory
instead if those cache entries had only a mild effect on performance,
but I don't think that's the case.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company