From: Andres Freund <andres(at)anarazel(dot)de>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: reducing memory usage by using "proxy" memory contexts?
Date: 2019-12-17 02:26:48
Message-ID: 20191217022648.jxkm54nxn64r4fbl@alap3.anarazel.de
Lists: pgsql-hackers
Hi,
On 2019-12-17 01:12:43 +0100, Tomas Vondra wrote:
> On Mon, Dec 16, 2019 at 03:35:12PM -0800, Andres Freund wrote:
> > But what if we had a new type of memory context that did not itself
> > manage memory underlying allocations, but instead did so via the parent?
> > If such a context tracked all the live allocations in some form of list,
> > it could then free them from the parent at reset time. In other words,
> > it'd proxy all memory management via the parent, only adding a separate
> > name, and tracking of all live chunks.
> >
> > Obviously such a context would be less efficient to reset than a plain
> > aset.c one - but I don't think that'd matter much for these types of
> > use-cases. The big advantage in this case would be that we wouldn't
> > have two separate "blocks" for each index cache entry, but
> > instead allocations could all be done within CacheMemoryContext.
> >
> > Does that sound like a sensible idea?
> >
>
> I do think it's an interesting idea, worth exploring.
>
> I agree it's probably OK if the proxy contexts are a bit less efficient,
> but I think we can restrict their use to places where that's not an
> issue (i.e. low frequency of resets, small number of allocated chunks
> etc.). And if needed we can probably find ways to improve the efficiency
> e.g. by replacing the linked list with a small hash table
> (to speed up pfree etc.).
I don't think you'd need a hash table for efficiency - I was thinking of
just using a doubly linked list. That allows O(1) unlinking.
> I think the big question is what this would mean for the parent context.
> Because suddenly it's a mix of chunks with different life spans, which
> would originally be segregated in different malloc-ed blocks. And now
> that would not be true, so e.g. after deleting the child context the
> memory would not be freed but just moved to the freelist.
I think in the case of CacheMemoryContext it'd not really be a large
change - we already have vastly different lifetimes there, e.g. for the
relcache entries themselves. I could also see using something like this
for some of the executor sub-contexts - they commonly have only very few
allocations, but need to be resettable individually.
> It would also confuse MemoryContextStats, which would suddenly not
> realize some of the chunks are actually "owned" by the child context.
> Maybe this could be improved, but only partially (unless we'd want to
> have a per-chunk flag if it's owned by the context or by a proxy).
I'm not sure it'd really be worth fixing this fully, tbh. Maybe just
reporting at MemoryContextStats time whether a sub-context is included
in the parent's total or not.
> Not sure if this would impact accounting (e.g. what if someone creates a
> custom aggregate, creating a separate proxy context per group?). Would
> that work or not?
I'm not sure what problem you're thinking of?
> Also, would this need to support nested proxy contexts? That might
> complicate things quite a bit, I'm afraid.
I mean, it'd probably not be a great idea to nest proxies deeply - due to
increased overhead - but I don't see why it wouldn't work. If it
actually is something that we'd want to make work efficiently at some
point, it shouldn't be too hard to have code to walk up the chain of
parent contexts at creation time to the next context that's not a proxy.
Greetings,
Andres Freund