Re: postgres vacuum memory limits

From: Vijaykumar Jain <vijaykumarjain(dot)github(at)gmail(dot)com>
To: Ayub M <hiayub(at)gmail(dot)com>
Cc: PostgreSQL General <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: postgres vacuum memory limits
Date: 2021-08-01 14:34:27
Message-ID: CAM+6J95ioT9z346_TNqO-uMZfHGaBdiHOY5fAj16Kk=ZnTWumg@mail.gmail.com
Lists: pgsql-general

On Sun, 1 Aug 2021 at 10:27, Ayub M <hiayub(at)gmail(dot)com> wrote:
>
> Hello, when maintenance_work_mem and autovacuum_work_mem are set, my
> understanding is that the vacuum and autovacuum sessions should be limited
> to using the memory limits set by these parameters. But I am seeing more
> memory being used than these limits by autovacuum sessions; any reason why
> this would happen?
>
> Please see the examples below, where maintenance_work_mem is set to 20MB
> and shared_buffers is 128MB. When I look at the memory for this session in
> top, it shows 162MB. But when default_statistics_target is increased to
> 3000, the session usage is 463MB, which is way more than the 20MB
> maintenance_work_mem and 128MB shared_buffers. Shouldn't the process memory
> be capped at 20+128MB?
>
>
> postgres=# show maintenance_work_mem ;
> maintenance_work_mem
> ----------------------
> 20MB
> (1 row)
> postgres=# vacuum analyze mdm_context;
> VACUUM
> postgres=# show shared_buffers;
> shared_buffers
> ----------------
> 128MB
> (1 row)
>
>   PID USER     PR NI   VIRT    RES    SHR S %CPU %MEM    TIME+ COMMAND
> 62246 postgres 20  0 422892 165996 139068 R 57.1 15.7 25:06.34 postgres: postgres postgres [local] VACUUM
>

Your assumption may be right, but I am not sure of the interpretation from
top. I have to admit I am not great at understanding top output (RES vs
VIRT) in general when it comes to limits.
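
That said, one hedged reading of the numbers above, assuming RES also
counts the shared_buffers pages the backend has touched (the SHR column):

    RES - SHR = 165996 KB - 139068 KB = 26928 KB, i.e. roughly 26MB private

which would put the private memory in the ballpark of maintenance_work_mem
(20MB) plus normal per-backend overhead. But again, I may be misreading top.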

I did a demo cgroup setup with max memory limited to 5MB, started psql
using cgexec, and ran VACUUM with maintenance_work_mem = 1024 (1MB); it ran
fine. I am not sharing the results, as they may divert the conversation.
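
For reference, the in-session part of the demo was just this (the cgroup
itself was created outside psql before launching it under cgexec; note the
setting is in kB, so 1024 means 1MB):

SET maintenance_work_mem = 1024;
SHOW maintenance_work_mem;   -- 1MB
VACUUM VERBOSE t;            -- t is the demo table from the DEBUG output below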

The vacuum process seems to collect dead tuples in batches sized as a
function of maintenance_work_mem, and if that is small, it will loop many
times (the run may take longer) but respect that limit (+- some tolerance).
Also, VACUUM VERBOSE only prints CPU usage per iteration of removing dead
tuples from pages, but no memory usage, so I don't know for sure.
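
If anyone wants to reproduce the multi-pass behaviour, a minimal sketch
along these lines should do it (index names end up matching the DEBUG
output below; the row counts are arbitrary):

CREATE TABLE t (col1 int);
INSERT INTO t SELECT g FROM generate_series(1, 1000000) g;
CREATE INDEX ON t (col1);          -- auto-named t_col1_idx
CREATE INDEX ON t (col1);          -- auto-named t_col1_idx1; a second index allows a parallel worker
DELETE FROM t WHERE col1 % 2 = 0;  -- leave ~500k dead tuples behind
SET maintenance_work_mem = 1024;   -- 1MB, so the dead-TID array fills quickly
VACUUM VERBOSE t;                  -- the verbose output shows each index-cleanup pass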

DEBUG: StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG: CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG: StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG: vacuuming "public.t"
DEBUG: launched 1 parallel vacuum worker for index vacuuming (planned: 1)
DEBUG: scanned index "t_col1_idx" to remove 174518 row versions
DETAIL: CPU: user: 0.04 s, system: 0.00 s, elapsed: 0.04 s
DEBUG: CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG: StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG: starting parallel vacuum worker for bulk delete
DEBUG: scanned index "t_col1_idx1" to remove 174518 row versions
DETAIL: CPU: user: 0.01 s, system: 0.00 s, elapsed: 0.02 s
DEBUG: CommitTransaction(1) name: unnamed; blockState: PARALLEL_INPROGRESS; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG: "t": removed 174518 dead item identifiers in 1424 pages

I could be corrected, as I could not really get values from the source to
profile memory usage per function call. From reading the source, I have a
feeling shared_buffers has nothing to do with the vacuum limit, but I may be
wrong.

I think someone who is more aware of the process/code can throw more light.
But thanks for asking; I will learn something figuring this out.

FROM THE CODE COMMENTS:
* The major space usage for LAZY VACUUM is storage for the array of dead tuple
* TIDs. We want to ensure we can vacuum even the very largest relations with
* finite memory space usage. To do that, we set upper bounds on the number of
* tuples we will keep track of at once.
*
* We are willing to use at most maintenance_work_mem (or perhaps
* autovacuum_work_mem) memory space to keep track of dead tuples. We
* initially allocate an array of TIDs of that size, with an upper limit that
* depends on table size (this limit ensures we don't allocate a huge area
* uselessly for vacuuming small tables). If the array threatens to overflow,
* we suspend the heap scan phase and perform a pass of index cleanup and page
* compaction, then resume the heap scan with an empty TID array.
*
* If we're processing a table with no indexes, we can just vacuum each page
* as we go; there's no need to save up multiple tuples to minimize the number
* of index scans performed. So we don't use maintenance_work_mem memory for
* the TID array, just enough to hold as many heap tuples as fit on one page.
*
* Lazy vacuum supports parallel execution with parallel worker processes. In
* a parallel vacuum, we perform both index vacuum and index cleanup with
* parallel worker processes. Individual indexes are processed by one vacuum
* process. At the beginning of a lazy vacuum (at lazy_scan_heap) we prepare
* the parallel context and initialize the DSM segment that contains shared
* information as well as the memory space for storing dead tuples. When
* starting either index vacuum or index cleanup, we launch parallel worker
* processes. Once all indexes are processed the parallel worker processes
* exit. After that, the leader process re-initializes the parallel context
* so that it can use the same DSM for multiple passes of index vacuum and
* for performing index cleanup. For updating the index statistics, we need
* to update the system table and since updates are not allowed during
* parallel mode we update the index statistics after exiting from the
* parallel mode.
*

vacuumlazy.c
<https://github.com/postgres/postgres/blob/70685385d70f8da73ab189a72f46311091ff09be/src/backend/access/heap/vacuumlazy.c#L12>
compute_max_dead_tuples
<https://github.com/postgres/postgres/blob/70685385d70f8da73ab189a72f46311091ff09be/src/backend/access/heap/vacuumlazy.c#L3457>
parallel_workers
<https://github.com/postgres/postgres/blob/70685385d70f8da73ab189a72f46311091ff09be/src/backend/access/heap/vacuumlazy.c#L3802>
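
As a back-of-the-envelope check (my arithmetic, not from the thread): an
ItemPointerData is 6 bytes, so with maintenance_work_mem = 1MB the dead-TID
array sized by compute_max_dead_tuples holds roughly

SELECT (1024 * 1024) / 6 AS tids_per_pass;   -- 174762

which is consistent with the ~174518 row versions removed per pass in the
DEBUG output above.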
