From: Joe Conway <mail(at)joeconway(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>, Greg Stark <stark(at)mit(dot)edu>
Cc: Thomas Munro <thomas(dot)munro(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Andres Freund <andres(at)anarazel(dot)de>, Andrew Dunstan <andrew(at)dunslane(dot)net>, Dagfinn Ilmari Mannsåker <ilmari(at)ilmari(dot)org>, Christoph Berg <myon(at)debian(dot)org>, mikael(dot)kjellstrom(at)gmail(dot)com, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Direct I/O
Date: 2023-04-19 14:24:59
Message-ID: f73117ef-efad-2a2d-3f9c-205c258ac8ec@joeconway.com
Lists: pgsql-hackers
On 4/19/23 10:11, Robert Haas wrote:
> On Tue, Apr 18, 2023 at 3:35 PM Greg Stark <stark(at)mit(dot)edu> wrote:
>> Well.... I'm more optimistic... That may not always be impossible.
>> We've already added the ability to add more shared memory after
>> startup. We could implement the ability to add or remove shared buffer
>> segments after startup. And it wouldn't be crazy to imagine a kernel
>> interface that lets us judge whether the kernel memory pressure makes
>> it reasonable for us to take more shared buffers or makes it necessary
>> to release shared memory to the kernel.
>
> On this point specifically, one fairly large problem that we have
> currently is that our buffer replacement algorithm is terrible. In
> workloads I've examined, either almost all buffers end up with a usage
> count of 5 or almost all buffers end up with a usage count of 0 or 1.
> Either way, we lose all or nearly all information about which buffers
> are actually hot, and we are not especially unlikely to evict some
> extremely hot buffer.
That has been my experience as well, although admittedly I have not
looked in quite a while.
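For what it's worth, the saturation behavior is easy to reproduce in a toy model. The sketch below is not PostgreSQL code -- the pool size, workload skew, and access/eviction rates are invented for illustration -- but it mimics a clock sweep with usage counts capped at 5: when accesses outpace evictions the counts pile up at the cap, and when eviction pressure dominates they collapse toward zero.

```python
import random

NBUFFERS = 1000
MAX_USAGE = 5  # cap on the per-buffer usage count, as in PostgreSQL

def simulate(accesses_per_round, evictions_per_round, rounds=2000, seed=42):
    """Toy clock sweep: an access bumps a buffer's usage count (capped);
    an eviction advances the clock hand, decrementing nonzero counts,
    and takes the first buffer found at zero."""
    usage = [0] * NBUFFERS
    hand = 0
    rng = random.Random(seed)
    for _ in range(rounds):
        for _ in range(accesses_per_round):
            # skewed workload: low-numbered buffers are much hotter
            b = min(int(rng.expovariate(1 / 50)), NBUFFERS - 1)
            usage[b] = min(usage[b] + 1, MAX_USAGE)
        for _ in range(evictions_per_round):
            while usage[hand] > 0:
                usage[hand] -= 1
                hand = (hand + 1) % NBUFFERS
            hand = (hand + 1) % NBUFFERS  # found a zero: evict this one
    return usage

light = simulate(10, 1)   # accesses outpace evictions: hot counts pin at 5
heavy = simulate(1, 50)   # evictions outpace accesses: counts driven to 0/1
```

With the first workload the hottest buffers all sit at the cap; with the second they all sit near zero. In neither regime does the counter say which buffer is safest to evict, which is the information loss described above.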
> I'm not saying that it isn't possible to fix this. I bet it is, and I
> hope someone does.
I keep looking at this blog post about Transparent Memory Offloading and
thinking that we could learn from it. Unfortunately, it is very
Linux-specific and requires a really up-to-date OS -- cgroup v2 and
kernel >= 5.19.
> I'm just making the point that even if we knew the amount of kernel
> memory pressure and even if we also had the ability to add and remove
> shared_buffers at will, it probably wouldn't help much as things
> stand today, because we're not in a good position to judge how large
> the cache would need to be in order to be useful, or what we ought to
> be storing in it.
The tactic TMO uses is basically to tune the available memory to get a
target memory pressure. That seems like it could work.
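In pseudocode terms, the control loop would be something like the sketch below. The function name, the PSI-style pressure reading, and all the thresholds are hypothetical -- nothing here is an API that TMO or PostgreSQL actually exposes:

```python
def tune_cache_size(size_gb, pressure, target=0.10,
                    step_gb=0.25, min_gb=1.0, max_gb=64.0):
    """One step of a TMO-style feedback loop: shrink the cache when the
    measured memory pressure exceeds the target, grow it back when
    pressure is comfortably below the target, otherwise hold steady."""
    if pressure > target:
        return max(min_gb, size_gb - step_gb)
    if pressure < target / 2:
        return min(max_gb, size_gb + step_gb)
    return size_gb
```

On Linux the pressure input could plausibly come from PSI (/proc/pressure/memory), which is the same cgroup v2 machinery the blog post depends on.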
--
Joe Conway
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com