Re: Vacuum ERRORs out considering freezing dead tuples from before OldestXmin

From: Melanie Plageman <melanieplageman(at)gmail(dot)com>
To: John Naylor <johncnaylorls(at)gmail(dot)com>
Cc: Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Robert Haas <robertmhaas(at)gmail(dot)com>, Peter Geoghegan <pg(at)bowt(dot)ie>, Heikki Linnakangas <hlinnaka(at)iki(dot)fi>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>, Andres Freund <andres(at)anarazel(dot)de>, Noah Misch <noah(at)leadboat(dot)com>
Subject: Re: Vacuum ERRORs out considering freezing dead tuples from before OldestXmin
Date: 2024-07-26 21:04:31
Message-ID: CAAKRu_ZhG5NEQU-h7m=aeocxRze4ALt5swuKM45bN0HRQBccew@mail.gmail.com
Lists: pgsql-hackers

On Wed, Jul 24, 2024 at 8:19 AM John Naylor <johncnaylorls(at)gmail(dot)com> wrote:
>
> On Wed, Jul 24, 2024 at 2:42 PM John Naylor <johncnaylorls(at)gmail(dot)com> wrote:
> > As for lowering the limit, we've experimented with 256kB here:
> >
> > https://www.postgresql.org/message-id/CANWCAZZUTvZ3LsYpauYQVzcEZXZ7Qe+9ntnHgYZDTWxPuL++zA@mail.gmail.com
> >
> > As I mention there, going lower than that would need a small amount of
> > reorganization in the radix tree. Not difficult -- the thing I'm
> > concerned about is that we'd likely need to document a separate
> > minimum for DSA, since that behaves strangely with 256kB and might not
> > work at all lower than that.
>
> For experimentation, here's a rough patch (really two, squashed
> together for now) that allows m_w_m to go down to 64kB.

Oh, great, thanks! I didn't read this closely enough before I posed my
upthread question about how small we should make the minimum. It
sounds like you've thought a lot about this.

I ran my test with your patch (on my 64-bit system, non-assert build)
and the result is great:

master with my test (slightly modified to use DELETE instead of
UPDATE, as mentioned upthread):
3.09s

master with your patch applied, MWM set to 64kB and 9000 rows instead
of 800000:
1.06s
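
For concreteness, the shape of that workload is roughly the following.
This is modeled on your example quoted below rather than on the actual
TAP test (which does more setup around the horizon), with the DELETE
and the 9000-row count substituted in:

drop table if exists test;
create table test (a int) with (autovacuum_enabled=false, fillfactor=10);
insert into test (a) select i from generate_series(1,9000) i;
create index on test (a);
-- DELETE rather than UPDATE, so every row becomes a dead tuple
delete from test;

-- force multiple index-vacuum passes by keeping the TID store tiny
set maintenance_work_mem = '64kB';
vacuum (verbose) test;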

> drop table if exists test;
> create table test (a int) with (autovacuum_enabled=false, fillfactor=10);
> insert into test (a) select i from generate_series(1,2000) i;
> create index on test (a);
> update test set a = a + 1;
>
> set maintenance_work_mem = '64kB';
> vacuum (verbose) test;
>
> INFO: vacuuming "john.public.test"
> INFO: finished vacuuming "john.public.test": index scans: 3
> pages: 0 removed, 91 remain, 91 scanned (100.00% of total)
>
> The advantage with this is that we don't need to care about
> MEMORY_CONTEXT_CHECKING or 32/64 bit-ness, since allocating a single
> large node will immediately blow the limit, and that will happen
> fairly quickly regardless. I suspect going this low will not work with
> dynamic shared memory and if so would need a warning comment.

I took a look at the patch, but I can't say I know enough about the
memory allocation subsystems and how TIDStore works to meaningfully
review it -- nor enough about DSM to comment about the interactions.

I suspect 256kB would also be fast enough to avoid my test timing out
on the buildfarm, but it is appealing to have a minimum for
maintenance_work_mem that is the same as work_mem.
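
For anyone comparing the two, the configured lower bounds are easy to
check from pg_settings, e.g.:

-- show the documented minimums (min_val is in the unit column's units)
select name, min_val, unit
from pg_settings
where name in ('work_mem', 'maintenance_work_mem');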

- Melanie
