From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: David Rowley <dgrowleyml(at)gmail(dot)com>
Cc: Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Using per-transaction memory contexts for storing decoded tuples
Date: 2024-09-23 04:28:49
Message-ID: CAA4eK1+iSNExkdhZSg72BWh6u0CuLaxpiXyrjSPFe2Cgy9fymQ@mail.gmail.com
Lists: pgsql-hackers
On Sun, Sep 22, 2024 at 11:27 AM David Rowley <dgrowleyml(at)gmail(dot)com> wrote:
>
> On Fri, 20 Sept 2024 at 17:46, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> >
> > On Fri, Sep 20, 2024 at 5:13 AM David Rowley <dgrowleyml(at)gmail(dot)com> wrote:
> > > In general, it's a bit annoying to have to code around this
> > > GenerationContext fragmentation issue.
> >
> > Right, and I am also slightly afraid that this may cause some
> > regression in other cases where defrag wouldn't help.
>
> Yeah, that's certainly a possibility. I was hoping that
> MemoryContextMemAllocated() being much larger than logical_work_mem
> could only happen when there is fragmentation, but certainly, you
> could be wasting effort trying to defrag transactions where the
> changes all arrive in WAL consecutively and there is no
> defragmentation. It might be some other large transaction that's
> causing the context's allocations to be fragmented. I don't have any
> good ideas on how to avoid wasting effort on non-problematic
> transactions. Maybe there's something that could be done if we knew
> the LSN of the first and last change and the gap between the LSNs was
> much larger than the WAL space used for this transaction. That would
> likely require tracking way more stuff than we do now, however.
>
With more information tracking, we could avoid defragmenting some
non-problematic transactions, but it would still be difficult to be
confident that we haven't harmed many cases, because only a few
interleaving small transactions are enough to make a transaction's
memory non-contiguous. We can try to think of ideas for implementing
defragmentation in our code once we can first prove that smaller
block sizes cause problems.
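
To make that concrete, here is a rough sketch of the kind of check
being discussed, assuming we tracked the WAL bytes each transaction
produced (txn->wal_bytes is a hypothetical field, and both
multipliers are arbitrary; this is not a proposal):

/*
 * Illustrative sketch only: consider defragmenting when the context
 * reports far more memory than we have accounted for (hinting at
 * fragmentation) and the transaction's changes span a much wider LSN
 * range than the WAL it wrote itself (hinting at interleaving).
 */
static bool
ReorderBufferShouldDefrag(ReorderBuffer *rb, ReorderBufferTXN *txn)
{
    Size    allocated = MemoryContextMemAllocated(rb->context, true);
    uint64  lsn_span = txn->final_lsn - txn->first_lsn;

    return allocated > rb->size * 2 &&
           lsn_span > txn->wal_bytes * 4;
}

Both thresholds would need tuning against real workloads before
anything like this could be trusted.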
> With the smaller blocks idea, I'm a bit concerned that using smaller
> blocks could cause regressions on systems that are better at releasing
> memory back to the OS after free() as no doubt malloc() would often be
> slower on those systems. There have been some complaints recently
> about glibc being a bit too happy to keep hold of memory after free()
> and I wondered if that was the reason why the small block test does
> not cause much of a performance regression. I wonder how the small
> block test would look on Mac, FreeBSD or Windows. I think it would be
> risky to assume that all is well with reducing the block size after
> testing on a single platform.
>
Good point. We need extensive testing on different platforms, as you
suggest, to verify whether smaller block sizes cause any regressions.
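
For reference, the smaller-block experiment amounts to changing the
block sizes the reorder buffer's tuple context is created with in
reorderbuffer.c, roughly as below (the current code passes
SLAB_LARGE_BLOCK_SIZE, i.e. 8MB, for all three size arguments; the
8kB SLAB_DEFAULT_BLOCK_SIZE here is just one possible value to test):

    /*
     * Cap generation blocks at 8kB instead of 8MB so that freeing one
     * transaction's changes more often returns whole blocks to
     * malloc(), at the cost of more allocator traffic, which is
     * exactly where platform differences could show up.
     */
    buffer->tup_context = GenerationContextCreate(new_ctx,
                                                  "Tuples",
                                                  SLAB_DEFAULT_BLOCK_SIZE,
                                                  SLAB_DEFAULT_BLOCK_SIZE,
                                                  SLAB_DEFAULT_BLOCK_SIZE);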
--
With Regards,
Amit Kapila.