Re: Lowering the minimum value for maintenance_work_mem

From: John Naylor <johncnaylorls(at)gmail(dot)com>
To: Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, pgsql-hackers(at)postgresql(dot)org, John Naylor <jcnaylor(at)gmail(dot)com>
Subject: Re: Lowering the minimum value for maintenance_work_mem
Date: 2024-05-20 06:05:32
Message-ID: CANWCAZZUTvZ3LsYpauYQVzcEZXZ7Qe+9ntnHgYZDTWxPuL++zA@mail.gmail.com
Lists: pgsql-hackers

On Mon, May 20, 2024 at 11:59 AM Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com> wrote:
>
> On Fri, May 17, 2024 at 5:55 AM Andres Freund <andres(at)anarazel(dot)de> wrote:

> > I think we should consider lowering the minimum setting of
> > maintenance_work_mem to the minimum of work_mem.
>
> +1 for lowering the minimum value of maintenance_work_mem. I've faced
> the same situation.
>
> Even if a shared tidstore is empty, TidStoreMemoryUsage() returns
> 256kB because it's the minimum segment size of DSA, i.e.
> DSA_MIN_SEGMENT_SIZE. So we can lower the minimum maintenance_work_mem
> down to 256kB, from a vacuum perspective.
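
If I'm reading it right, that floor falls straight out of the DSA area
itself. Roughly (a hand-wavy sketch, not the actual tidstore code --
the function name is made up, and I'm assuming the shared case just
reports the area's total size via dsa_get_total_size()):

#include "postgres.h"
#include "utils/dsa.h"

/* Why an empty shared tidstore still reports 256kB: the area always
 * keeps at least one segment of DSA_MIN_SEGMENT_SIZE (256kB) around,
 * and memory usage is taken from the area's total size. */
static size_t
shared_tidstore_size_floor(dsa_area *area)
{
	size_t		total = dsa_get_total_size(area);

	/* even with nothing stored, total >= DSA_MIN_SEGMENT_SIZE */
	Assert(total >= 256 * 1024);
	return total;
}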

I've verified that 256kB works with both local and shared memory using
the commands below; 200k records are enough to force a second round of
index cleanup. I don't think we can go much smaller than that without
changing how we size the blocks in the node slab contexts (or when
those contexts are created), which is currently somewhat arbitrary.
That will need some thought, at least once we get a use case with
work_mem as the limit.

set maintenance_work_mem = '256kB';

drop table if exists test;
create unlogged table test (a int) with (autovacuum_enabled=false);
insert into test (a) select i from generate_series(1,200_000) i;
create index on test (a);
--create index on test (a); -- toggle for parallel vacuum

delete from test;
vacuum (verbose) test;

Side note: I'm confused why shared memory works at all in this case,
since it failed with 1MB init segments until we allowed callers to
specify a smaller init size. The DSA overhead seems to be significant
at small sizes, as evidenced by the amount of usable memory:

shared:
INFO: finished vacuuming "john.public.test": index scans: 56

local:
INFO: finished vacuuming "john.public.test": index scans: 2
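
(If the number of index scans scales roughly inversely with the memory
available for TIDs, that's about 56/2 = 28x less effective memory in
the shared case -- only something like 10kB of the 256kB limit
actually holding TIDs. Back-of-the-envelope only, of course.)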
