From: Karthik Krishnakumar <karthikk(at)zohocorp(dot)com>
To: "MichaelDBA" <MichaelDBA(at)sqlexec(dot)com>
Cc: "pgsql-admin" <pgsql-admin(at)lists(dot)postgresql(dot)org>
Subject: Re: Prepared statements generating a lot of temp files.
Date: 2023-03-23 10:50:54
Message-ID: 1870e133a02.b7332a6352205.2292083279467439134@zohocorp.com
Lists: pgsql-admin
Thanks, will check this.
Is there some formula you are using to arrive at this number for work_mem?
We have queries that frequently have more than a dozen joins/aggregates, so I have been conservative in changing work_mem.
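Guessing it is a back-of-the-envelope like the one below (illustrative only; the ~12 sort/hash nodes per query is my assumption, since work_mem is allocated per executor node, per backend, not per query):

-- rough sizing sketch, not an official formula:
--   available RAM      = 256GB total - 64GB shared_buffers = 192GB
--   per-connection cap = 192GB / 50 active connections    ~= 3.8GB
--   per-node budget    = 3.8GB / ~12 sort/hash nodes      ~= 320MB
-- which lines up roughly with the 256MB suggestion.
SET work_mem = '256MB';  -- per-session trial before touching the server default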
BTW, we are using PG version 10 (I know it's old and am trying to convince folks to upgrade :)
thanks
karthik
---- On Thu, 23 Mar 2023 15:56:36 +0530 MichaelDBA <MichaelDBA(at)sqlexec(dot)com> wrote ---
Looks like you can handle 256MB work_mem.
Give it a shot and monitor temp_files. You do have log_temp_files = 0, right? Also, keep an eye out for "out of memory" errors in the log file. What PG version are you using?
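A minimal sketch of the monitoring side (standard setting and catalog view, nothing specific to this workload):

SHOW log_temp_files;  -- 0 means every temp file is logged, whatever its size
-- cumulative temp-file counters, per database:
SELECT datname, temp_files, temp_bytes
FROM pg_stat_database
WHERE datname = current_database();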
Regards,
Michael Vitale
michaelvitale(at)sqlexec(dot)com
703-600-9343

Karthik Krishnakumar wrote on 3/23/2023 6:21 AM:
RAM - 256GB
shared_buffers - 64GB
maintenance_work_mem - 1GB
work_mem - 24MB
There are ~50 active connections at any given time.
From pg_badger: the average temp file size for a bulk insert/update is about 200MB; the max size is multiple GBs, depending on the table.
It is a write-heavy workload, with inserts/updates happening around the clock.
thanks
karthik
---- On Thu, 23 Mar 2023 15:33:11 +0530 MichaelDBA <MichaelDBA(at)sqlexec(dot)com> wrote ---
Perhaps take the other approach: increase work_mem to make the bulk inserts fit into memory. You can easily undo work_mem changes; it only requires a SIGHUP (reload), not a restart.
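For example, a minimal sketch of that change (ALTER SYSTEM persists the value to postgresql.auto.conf; 256MB is just the value under discussion):

ALTER SYSTEM SET work_mem = '256MB';
SELECT pg_reload_conf();  -- sends the SIGHUP; no restart needed
SHOW work_mem;            -- sessions that have not SET it locally pick up the new value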
What memory do you have now, and what is work_mem currently set to? Also, how many concurrent, active connections do you average at a time?
Regards,
Michael Vitale
michaelvitale(at)sqlexec(dot)com
703-600-9343

Karthik Krishnakumar wrote on 3/23/2023 5:59 AM:
Thanks. Checked with the devs, and it does look like the application is doing some sort of bulk insert, and at the moment it cannot be changed to use "COPY FROM".
Will limiting bulk inserts to match work_mem (assuming this is the GUC that applies in this case) reduce this disk activity?
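Something like the following is the kind of shape I mean (a hypothetical sketch; the table, columns, and batch size are invented for illustration):

-- bind bounded array slices per EXECUTE instead of one enormous statement,
-- so each sort/hash step can stay within work_mem:
PREPARE bulk_ins (bigint[], text[]) AS
    INSERT INTO events (id, payload)
    SELECT * FROM unnest($1, $2);
-- the application would call this with, say, a few thousand rows per batch:
EXECUTE bulk_ins(ARRAY[1, 2]::bigint[], ARRAY['a', 'b']::text[]);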
thanks,
karthik
---- On Thu, 23 Mar 2023 14:43:19 +0530 Michaeldba(at)sqlexec(dot)com wrote ---
Look at the size of the bind variables and also whether you are doing bulk inserts.
Sent from my iPad
On Mar 23, 2023, at 5:10 AM, Karthik Krishnakumar <karthikk(at)zohocorp(dot)com> wrote:
Hi admins,
I am analyzing the temp files being generated, using pg_badger.
Under "queries generating the most temporary files", I am finding insert/update queries (via prepared statements) writing a lot of data to temp files.
I am trying to figure out why inserts are generating such temp files.
Why are inserts/updates generating temp files? What can be done to avoid this?
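For reference, the kind of check I am planning to run (a sketch only; the table names are placeholders, and note that EXPLAIN ANALYZE really executes the INSERT, hence the rollback):

BEGIN;
EXPLAIN (ANALYZE, BUFFERS)
    INSERT INTO target_table
    SELECT * FROM staging_table ORDER BY created_at;
ROLLBACK;  -- ANALYZE actually performed the insert, so undo it
-- lines such as "Sort Method: external merge  Disk: 204800kB" mark the
-- nodes that spilled to temp files.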
thanks,
karthik