From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: dangal <danielito(dot)gallo(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: temporary files
Date: 2019-10-07 21:09:33
Message-ID: 20191007210933.7kla65cvnddnvlwm@development
Lists: pgsql-general
On Mon, Oct 07, 2019 at 01:52:41PM -0700, dangal wrote:
>Hello everyone, I have a question to see if you can help me. I have set
>work_mem to 100 MB, but pgbadger shows the following:
>
>Queries generating the most temporary files (N)
>
>Count  Total size  Min size   Max size   Avg size
>   58  3.24 GiB    57.15 MiB  57.15 MiB  57.15 MiB
>
>How can it be that 100 MB is not enough for a query that requires 57 MB?
>Thanks a lot
>
The on-disk and in-memory representations are not the same, and the
on-disk one is often significantly more efficient. Data that fits in a
57MB temporary file may need ~150MB of memory to perform an in-memory
sort, for example.
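A quick way to see this (illustrative sketch - the table and column
names are made up, and the exact numbers depend on your data):

  SET work_mem = '4MB';
  EXPLAIN ANALYZE SELECT * FROM t ORDER BY a;
  -- Sort Method: external merge  Disk: 58520kB

  SET work_mem = '256MB';
  EXPLAIN ANALYZE SELECT * FROM t ORDER BY a;
  -- Sort Method: quicksort  Memory: 150000kB

The same input that spills ~57MB to disk can need roughly 2-3x that
much memory to be sorted entirely in memory.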
Yet another reason may be that e.g. a hash join splits the data into
batches, and each batch has to fit into work_mem. Whenever we hit the
limit, we double the number of batches, i.e. we cut the batch size in
half. Thus the batch size ends up somewhere between 50 and 100MB, with
~75MB on average. Furthermore, we estimate the number of batches before
running the query, so we may start with unnecessarily many batches,
pushing the average batch size down even more.
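You can see the batching in EXPLAIN ANALYZE output for the Hash node
(illustrative fragment, the numbers are made up):

  ->  Hash  (actual time=... rows=2000000 loops=1)
        Buckets: 131072  Batches: 4 (originally 2)  Memory Usage: 76800kB

Here "Batches: 4 (originally 2)" means the hash started with 2 batches
based on the estimate, and the executor had to double that at run time
to stay within work_mem.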
FWIW it's somewhat suspicious that you have 58 temp files with almost
no variability in size. That looks as if a single query was being
executed repeatedly. It may be worth finding out which query that is.
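If you have the pg_stat_statements extension installed, something like
this should show the top temp-file producers (untested sketch):

  SELECT query, calls, temp_blks_written
    FROM pg_stat_statements
   ORDER BY temp_blks_written DESC
   LIMIT 10;

Alternatively, setting log_temp_files = 0 logs every temporary file
along with the statement that created it.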
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services