From: | "Joseph D Wagner" <joe(at)josephdwagner(dot)info> |
---|---|
To: | "'PostgreSQL Hackers'" <pgsql-hackers(at)lists(dot)postgresql(dot)org> |
Subject: | proposal: Allocate work_mem From Pool |
Date: | 2022-07-11 03:45:38 |
Message-ID: | 005201d894d8$b0424710$10c6d530$@josephdwagner.info |
Lists: pgsql-hackers
I'm new here, so forgive me if this is a bad idea, or forgive my lack of
knowledge of how to optimize PostgreSQL.
I find PostgreSQL to be great with a large number of small transactions,
which covers most use cases. However, my experience has not been so great
on the opposite end -- a small number of large transactions, i.e. Big Data.
I had to increase work_mem to 3GB to stop my queries from spilling to disk.
However, that's risky because work_mem applies per sort or hash operation, not
per query or connection, and a single query can run several such operations at
once, so total memory use could easily spiral out of control.
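For context, the closest workaround I've found so far is to scope the 3GB
override to just the transaction that needs it and to check EXPLAIN ANALYZE
for spills. A rough sketch (the generate_series query is only a stand-in for
one of my large sorts):

    BEGIN;
    -- Raise work_mem for this transaction only; it reverts at COMMIT/ROLLBACK.
    SET LOCAL work_mem = '3GB';
    -- In the plan, "Sort Method: quicksort  Memory: ..." means the sort stayed
    -- in memory; "Sort Method: external merge  Disk: ..." means it spilled.
    EXPLAIN (ANALYZE, BUFFERS)
        SELECT g FROM generate_series(1, 10000000) AS g ORDER BY g DESC;
    COMMIT;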
I think it would be better if work_mem were allocated from a pool of memory as
needed and returned to the pool when no longer needed. The pool could
optionally be allocated from huge pages. That would give large and mixed
workloads the flexibility to grab more memory when they need it, without
spilling to disk, while keeping the server-wide maximum that can be used
deterministic.
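To make the shape of the idea concrete, here is a purely hypothetical sketch;
none of the pool-related GUCs below exist today, and the names are made up for
illustration only:

    -- Hypothetical settings illustrating the proposal, not existing parameters.
    ALTER SYSTEM SET work_mem_pool_size = '24GB';    -- hypothetical: total pool all backends draw from
    ALTER SYSTEM SET work_mem_pool_huge_pages = on;  -- hypothetical: optionally back the pool with huge pages
    ALTER SYSTEM SET work_mem = '3GB';               -- existing GUC, reinterpreted as the per-operation cap
                                                     -- on what one sort/hash may borrow from the pool

The point being that the server-wide ceiling becomes the pool size, rather
than work_mem multiplied by however many operations happen to be running at
once.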
Thoughts?
Thank you for your time.
Joseph D. Wagner
My specifics (see the config sketch after this list):
- 64 GB box
- 16 GB shared buffers, although queries are only using about 12 GB of that
- 16 GB effective cache
- 2-3 GB used by the OS and apps
- the rest is available for PostgreSQL queries/connections/whatever as needed
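For reference, and assuming "shared buffers" and "effective cache" above map
to shared_buffers and effective_cache_size, that setup expressed as settings
would look roughly like:

    ALTER SYSTEM SET shared_buffers = '16GB';        -- takes effect only after a restart
    ALTER SYSTEM SET effective_cache_size = '16GB';
    ALTER SYSTEM SET work_mem = '3GB';               -- the per-operation value I had to reach for
    SELECT pg_reload_conf();                         -- picks up the reloadable settings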