Re: small temp files

From: Scott Ribe <scott_ribe(at)elevated-dev(dot)com>
To: Paul Smith* <paul(at)pscs(dot)co(dot)uk>, "David G(dot) Johnston" <david(dot)g(dot)johnston(at)gmail(dot)com>
Cc: pgsql-admin(at)lists(dot)postgresql(dot)org
Subject: Re: small temp files
Date: 2024-07-22 13:56:06
Message-ID: 6C269096-47A8-4F23-87AC-512656D4A084@elevated-dev.com
Lists: pgsql-admin

> ...with each operation generally being allowed to use as much memory as this value specifies before it starts to write data into temporary files.

So that doesn't explain the 7452-byte files--unless an operation can use a temporary file as an addendum to work_mem, rather than spilling the RAM contents to disk, which is my understanding of how it works.

> So, if it's doing lots of joins, there may be lots of bits of temporary data which together add up to more than work_mem.

If it's doing lots of joins, each join gets its own work_mem allocation--there is no "adding up" across operations using work_mem.
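To make the point concrete, here is a trivial illustration (not PostgreSQL code; the plan shape and sizes are invented): because work_mem is a per-operation budget, a query plan with several sorts or hash joins can use a multiple of it before any single operation spills.

```python
# Toy illustration of work_mem as a per-operation budget.
# This is NOT PostgreSQL internals; the plan below is hypothetical.
WORK_MEM_MB = 128

# A hypothetical plan with three memory-hungry nodes.
ops_in_plan = ["sort", "hash join", "hash join"]

# Each node may use up to work_mem before it spills to disk,
# so peak memory is (number of such nodes) x work_mem -- the
# budgets do not "add up" into one shared pool.
peak_mb = len(ops_in_plan) * WORK_MEM_MB
print(peak_mb)
```

So a single operation never spills merely because *other* operations are also using memory; spilling is decided per node against its own budget.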

> You expect the smallest temporary file to be 128MB? I.e., if the memory used exceeds work_mem all of it gets put into the temp file at that point? Versus only the amount of data that exceeds work_mem getting pushed out to the temporary file. The overflow only design seems much more reasonable - why write to disk that which fits, and already exists, in memory.

Well, I don't know of an algorithm which can effectively sort 128MB + 7KB of data using 128MB of RAM plus a 7KB file. The same goes for many of the other operations that use work_mem, so yes, I expected a spill to start as a 128MB file and grow as needed. If I'm wrong and there are operations which can effectively use temp files as an adjunct to work_mem, then that would be the answer to my question. Does anybody know for sure whether this is the case?
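For reference, the algorithm I have in mind is the classic external merge sort, sketched below. This is a rough illustration, not PostgreSQL's actual tuplesort implementation, and the budget constant is invented: once the in-memory buffer exceeds the budget, the *whole* buffer is sorted and written out as a run, which is why I'd expect the smallest spill file to be roughly work_mem-sized rather than a tiny overflow file.

```python
# Rough sketch of external merge sort (NOT PostgreSQL's tuplesort).
# WORK_MEM is a pretend budget measured in rows for simplicity.
import heapq
import os
import tempfile

WORK_MEM = 1024  # hypothetical per-operation budget, in rows

def external_sort(rows):
    runs = []   # paths of sorted spill files ("runs")
    buf = []
    for row in rows:
        buf.append(row)
        if len(buf) >= WORK_MEM:      # budget exceeded:
            buf.sort()                # sort the whole buffer...
            fd, path = tempfile.mkstemp()
            with os.fdopen(fd, "w") as f:
                f.writelines(f"{r}\n" for r in buf)  # ...and spill all of it
            runs.append(path)
            buf = []
    buf.sort()
    if not runs:                      # everything fit in memory: no temp files
        return buf
    # Merge the sorted runs plus the in-memory tail with a k-way heap merge.
    files = [open(p) for p in runs]
    iters = [(int(line) for line in f) for f in files]
    merged = list(heapq.merge(*iters, iter(buf)))
    for f in files:
        f.close()
    for p in runs:
        os.remove(p)
    return merged
```

Note that in this scheme every spill file is a full sorted run of WORK_MEM rows (except possibly none at all, when the input fits in memory); there is no step that writes only the few rows that overflowed the budget.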
