From: Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>
To: David Rowley <dgrowleyml(at)gmail(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: maintenance_work_mem = 64kB doesn't work for vacuum
Date: 2025-03-10 04:22:21
Message-ID: CAD21AoCvn2CVJfhB=H_+Z7gVeQ733mRk1BOrQfwSiTiU+mQtFA@mail.gmail.com
Lists: pgsql-hackers
On Sun, Mar 9, 2025 at 7:03 PM David Rowley <dgrowleyml(at)gmail(dot)com> wrote:
>
> On Mon, 10 Mar 2025 at 10:30, David Rowley <dgrowleyml(at)gmail(dot)com> wrote:
> > Could you do something similar to what's in hash_agg_check_limits(),
> > where we check we've got at least 1 item before bailing before we've
> > used up all the prescribed memory? That seems like a safer coding
> > practice, as if in the future the minimum usage for a DSM segment goes
> > above 256KB, the bug comes back again.
>
> FWIW, I had something like the attached in mind.
>
Thank you for the patch! I like your idea. This means that even if we
set maintenance_work_mem to 64kB, the memory usage would not actually
be limited to 64kB, but that's probably fine since such a low setting
is primarily for testing purposes.
Regarding that patch, we need to note that lpdead_items is a counter
that is never reset during the entire vacuum. Therefore, with
maintenance_work_mem = 64kB, once we collect at least one LP_DEAD item,
we would perform a cycle of index vacuuming and heap vacuuming for
every subsequent block, even for blocks that have no LP_DEAD items. I
think we should use vacrel->dead_items_info->num_items instead, along
the lines of the sketch below.
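
Just to illustrate, here is a rough sketch of the kind of condition I
have in mind (a sketch only, not the actual patch; it assumes the check
stays where lazy_scan_heap() currently tests the dead_items memory
limit):

    /*
     * Sketch only: start a round of index and heap vacuuming once the
     * dead-item store has exceeded its memory budget AND we have
     * collected at least one TID since the last round.  num_items is
     * reset by dead_items_reset() after each lazy_vacuum() call,
     * unlike vacrel->lpdead_items, which keeps growing for the whole
     * vacuum.
     */
    if (TidStoreMemoryUsage(vacrel->dead_items) > vacrel->dead_items_info->max_bytes &&
        vacrel->dead_items_info->num_items > 0)
        lazy_vacuum(vacrel);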
Regards,
--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com