| From: | Frits Jalvingh <jal(at)etc(dot)to> |
|---|---|
| To: | pgsql-performance(at)lists(dot)postgresql(dot)org |
| Subject: | temp_file_limit? |
| Date: | 2022-12-18 11:48:03 |
| Message-ID: | CAKhTGFXmSBSXYjBXVGiQ_VO7Wz14SrNy2sQOU8KKH3priWKWUQ@mail.gmail.com |
| Lists: | pgsql-performance |
Hi list,
I have a misbehaving query that uses all available disk space and then
terminates with a "cannot write block" error. To prevent other processes
from running into trouble, I've set the following in postgresql.conf:
temp_file_limit = 100GB
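As an aside, here is a minimal sketch of setting and verifying this limit from psql instead of editing the file directly (it assumes superuser access; temp_file_limit is a superuser-settable parameter that takes effect on a reload, no restart needed):

```sql
-- Set the limit cluster-wide; ALTER SYSTEM writes to postgresql.auto.conf.
ALTER SYSTEM SET temp_file_limit = '100GB';

-- temp_file_limit does not require a restart, so a reload suffices.
SELECT pg_reload_conf();

-- Verify the value the current session actually sees.
SHOW temp_file_limit;
```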
The query parallelizes and uses one parallel worker while executing, but
it does not abort when the temp file limit is reached:
345G pgsql_tmp
It does abort eventually, but only after using well over 300 GB:
[53400] ERROR: temporary file size exceeds temp_file_limit (104857600kB)
Where: parallel worker
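For anyone trying to reproduce this, one way to watch temp file usage while the query runs is pg_ls_tmpdir() (available since PostgreSQL 12, superuser or pg_monitor membership required); a sketch:

```sql
-- Per-file view of the default tablespace's pgsql_tmp; parallel workers
-- write separate files named pgsql_tmp<PID>.<N>, so each process's
-- usage shows up individually.
SELECT name, pg_size_pretty(size) AS size
FROM pg_ls_tmpdir()
ORDER BY size DESC;

-- Total temp space currently in use.
SELECT pg_size_pretty(sum(size)) AS total FROM pg_ls_tmpdir();
```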
The comment in the file states that this is a per-session parameter, so
what is going wrong here?
I am using Postgres 14 on Ubuntu.
Regards,
Frits