From: | "Ideriha, Takeshi" <ideriha(dot)takeshi(at)jp(dot)fujitsu(dot)com> |
---|---|
To: | 'Kyotaro HORIGUCHI' <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp> |
Cc: | "alvherre(at)2ndquadrant(dot)com" <alvherre(at)2ndquadrant(dot)com>, "tomas(dot)vondra(at)2ndquadrant(dot)com" <tomas(dot)vondra(at)2ndquadrant(dot)com>, "bruce(at)momjian(dot)us" <bruce(at)momjian(dot)us>, "andres(at)anarazel(dot)de" <andres(at)anarazel(dot)de>, "robertmhaas(at)gmail(dot)com" <robertmhaas(at)gmail(dot)com>, "tgl(at)sss(dot)pgh(dot)pa(dot)us" <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "pgsql-hackers(at)lists(dot)postgresql(dot)org" <pgsql-hackers(at)lists(dot)postgresql(dot)org>, "michael(dot)paquier(at)gmail(dot)com" <michael(dot)paquier(at)gmail(dot)com>, "david(at)pgmasters(dot)net" <david(at)pgmasters(dot)net>, "craig(at)2ndquadrant(dot)com" <craig(at)2ndquadrant(dot)com>, "Tsunakawa, Takayuki" <tsunakawa(dot)takay(at)jp(dot)fujitsu(dot)com>, "Ideriha, Takeshi" <ideriha(dot)takeshi(at)jp(dot)fujitsu(dot)com> |
Subject: | RE: Protect syscache from bloating with negative cache entries |
Date: | 2019-02-15 12:31:52 |
Message-ID: | 4E72940DA2BF16479384A86D54D0988A6F424317@G01JPEXMBKW04 |
Lists: | pgsql-hackers |
>From: Ideriha, Takeshi [mailto:ideriha(dot)takeshi(at)jp(dot)fujitsu(dot)com]
>>About the new global-size-based eviction (2), cache entry creation
>>becomes slow after the total size reaches the limit, since every new
>>entry evicts one or more old (= not-recently-used) entries. Because it
>>does not need knobs for each cache, it becomes far more realistic. So I
>>added documentation of "catalog_cache_max_size" in 0005.
>
>Now I'm also running a benchmark, which will be posted in another email.
According to recent comments by Andres and Bruce, maybe we should address
negative cache bloat step by step, for example by reviewing Tom's patch.
But in the meantime I ran some benchmarks with only the hard-limit option
enabled and the time-related option disabled, because figures for this case
have not been provided in this thread. So let me share them.
I did two experiments. The first shows that negative cache bloat is suppressed.
This thread originated from the issue that the negative cache for pg_statistic
bloats as a temp table is repeatedly created and dropped:
https://www.postgresql.org/message-id/20161219.201505.11562604.horiguchi.kyotaro%40lab.ntt.co.jp
Using the script attached to the first email in this thread, I repeated the
create-and-drop of a temp table 10,000 times.
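The bloat mechanism described above can be modeled with a minimal sketch (a hypothetical Python model, not the actual syscache C code): each lookup of a catalog row that does not exist leaves a negative entry that is never removed, so each create-and-drop cycle adds one more permanent entry.

```python
# Minimal model of negative-entry bloat (illustration only; the real
# syscache is a C hash table keyed by catalog lookup keys).
cache = {}

def lookup(cache, key, catalog):
    """Return the catalog row for key, caching None ("negative entry") on miss."""
    if key not in cache:
        cache[key] = catalog.get(key)
    return cache[key]

catalog = {}  # pg_statistic has no rows for a freshly created temp table

for i in range(10000):          # create-and-drop loop from the reproduction script
    temp_table_oid = 100000 + i  # each new temp table gets a fresh OID (assumed)
    lookup(cache, temp_table_oid, catalog)  # planner probes pg_statistic

# Every probe missed, and every miss left a permanent negative entry.
print(len(cache))  # 10000 entries, growing linearly with the number of cycles
```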
(The experiment was repeated 5 times; catalog_cache_max_size = 500kB;
the master branch was compared with the patch with the hard memory limit.)
Here are TPS and the CacheMemoryContext 'used' memory (total minus freespace),
calculated from MemoryContextPrintStats(), at 100, 1000, and 10000
create-and-drop transactions. The result shows that cache bloat is suppressed
after the limit is exceeded (at 10000), but TPS declines regardless of the limit.
number of tx (create and drop)   |    100 |    1000 |    10000
---------------------------------+--------+---------+---------
used CacheMemoryContext (master) | 610296 | 2029256 | 15909024
used CacheMemoryContext (patch)  | 755176 |  880552 |   880592
---------------------------------+--------+---------+---------
TPS (master)                     |    414 |     407 |      399
TPS (patch)                      |    242 |     225 |      220
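The hard-limit behavior shown in the table can be sketched as a global-size-bounded LRU (a simplified Python model; the per-entry size of 100 bytes is an assumption for illustration, and the real patch tracks accesses differently):

```python
from collections import OrderedDict

class BoundedCache:
    """LRU cache bounded by total byte size, modeling catalog_cache_max_size."""
    def __init__(self, max_bytes):
        self.max_bytes = max_bytes
        self.total = 0
        self.entries = OrderedDict()  # key -> (value, size), oldest first

    def put(self, key, value, size):
        if key in self.entries:
            self.total -= self.entries.pop(key)[1]
        self.entries[key] = (value, size)
        self.total += size
        # Evict least-recently-used entries until back under the limit.
        while self.total > self.max_bytes:
            _, (_, evicted_size) = self.entries.popitem(last=False)
            self.total -= evicted_size

    def get(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)  # mark as recently used
            return self.entries[key][0]
        return None

cache = BoundedCache(max_bytes=500 * 1024)  # catalog_cache_max_size = 500kB
for i in range(10000):
    cache.put(i, None, 100)  # assume ~100 bytes per negative entry
print(cache.total)  # 512000: capped at the 500kB limit, unlike master
```

The eviction loop in put() is where the TPS cost in the table comes from: once the limit is reached, every insertion also pays for one or more evictions.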
The second experiment used Tomas's script posted a while ago. The scenario is
to run SELECT 1 against one of multiple tables chosen at random (uniform
distribution).
(The experiment was repeated 5 times; catalog_cache_max_size = 10MB;
the master branch was compared with the patch with only the hard memory
limit enabled.)
Before running the benchmark, I confirmed with a debug option that pruning
happens only in the 10000-table case. The result shows degradation regardless
of whether pruning has occurred. I personally still need a hard size limit,
but I'm surprised that the difference is so significant.
number of tables |   100 |  1000 | 10000
-----------------+-------+-------+------
TPS (master)     | 10966 | 10654 |  9099
TPS (patch)      |  4491 |  2099 |   378
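The steep drop at 10000 tables is consistent with cache thrashing: once the entries needed by the uniform-random workload no longer fit under the limit, most lookups miss and each miss pays for an eviction. A rough Python model (the capacity of ~1000 entries and the access count are assumptions, not measured values):

```python
import random
from collections import OrderedDict

def hit_rate(num_keys, capacity, accesses=50000, seed=42):
    """Hit rate of an LRU cache of `capacity` entries under uniform random access."""
    rng = random.Random(seed)
    lru = OrderedDict()
    hits = 0
    for _ in range(accesses):
        key = rng.randrange(num_keys)
        if key in lru:
            hits += 1
            lru.move_to_end(key)  # mark as recently used
        else:
            lru[key] = True
            if len(lru) > capacity:
                lru.popitem(last=False)  # evict least recently used
    return hits / accesses

# With an assumed capacity of ~1000 entries, 100 tables fit entirely,
# while 10000 tables thrash the cache.
print(hit_rate(100, 1000))    # ~1.0 after warm-up
print(hit_rate(10000, 1000))  # ~0.1, close to capacity / num_keys
```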
Regards,
Takeshi Ideriha