From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
Cc: Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: HASH_CHUNK_SIZE vs malloc rounding
Date: 2016-11-28 17:27:18
Message-ID: 6214.1480354038@sss.pgh.pa.us
Lists: pgsql-hackers

Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com> writes:
> I bet other allocators also do badly with "32KB plus a smidgen". To
> minimise overhead we'd probably need to try to arrange for exactly
> 32KB (or some other power of 2 or at least factor of common page/chunk
> size?) to arrive into malloc, which means accounting for both
> nodeHash.c's header and aset.c's headers in nodeHash.c, which seems a
> bit horrible. It may not be worth doing anything about.
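
To make that arithmetic concrete, here is a minimal sketch of how the
request grows on its way down to malloc. The header sizes are
illustrative assumptions only; the real overheads come from nodeHash.c's
chunk header and aset.c's chunk and block headers.

    /* Minimal sketch; the three header sizes are assumed, not the
     * actual values from nodeHash.c or aset.c. */
    #include <stdio.h>
    #include <stddef.h>

    #define HASH_CHUNK_SIZE (32 * 1024) /* desired payload: exactly 32KB */
    #define NODEHASH_HDR    16          /* assumed nodeHash.c chunk header */
    #define ASET_CHUNK_HDR  16          /* assumed aset.c per-chunk overhead */
    #define ASET_BLOCK_HDR  32          /* assumed aset.c per-block overhead */

    int
    main(void)
    {
        size_t palloc_request = NODEHASH_HDR + HASH_CHUNK_SIZE;
        size_t malloc_request = ASET_BLOCK_HDR + ASET_CHUNK_HDR + palloc_request;

        /* malloc sees "32KB plus a smidgen": even one byte past a 32KB
         * size class pushes the allocation into the next class. */
        printf("palloc sees %zu bytes, malloc sees %zu bytes (32KB + %zu)\n",
               palloc_request, malloc_request,
               malloc_request - (size_t) HASH_CHUNK_SIZE);
        return 0;
    }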
Yeah, the other problem is that without a lot more knowledge of the
specific allocator, we shouldn't really assume that it's good or bad with
an exact-power-of-2 request --- it might well have its own overhead.
It is an issue though, and not only in nodeHash.c. I'm pretty sure that
StringInfo also makes exact-power-of-2 requests for no essential reason,
and there are probably many other places.
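
For reference, a simplified sketch of the doubling pattern that produces
those power-of-2 requests (plain realloc stands in for repalloc, and the
struct is a stand-in for StringInfoData):

    #include <stdlib.h>

    /* Simplified sketch of the grow-by-doubling pattern: because the
     * initial size (1024) is a power of 2, every request handed to the
     * allocator is an exact power of 2 -- which the allocator then
     * tops off with its own header. */
    typedef struct { char *data; size_t len; size_t maxlen; } SketchStringInfo;

    static void
    sketch_enlarge(SketchStringInfo *str, size_t needed)
    {
        size_t newlen = str->maxlen ? str->maxlen : 1024;

        while (newlen < needed)
            newlen *= 2;
        if (newlen != str->maxlen)
        {
            str->data = (char *) realloc(str->data, newlen);
            str->maxlen = newlen;
        }
    }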
We could imagine providing an mmgr API function along the lines of "adjust
this request size to the nearest thing that can be allocated efficiently".
That would avoid the need for callers to know about aset.c overhead
explicitly. I'm not sure how it could deal with platform-specific malloc
vagaries though :-(
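
A hypothetical sketch of what such a function could look like; the name
and the 16-byte chunk header are invented for illustration, and it
deliberately ignores the platform-malloc problem noted above:

    #include <stdio.h>
    #include <stddef.h>

    #define ASSUMED_CHUNK_HDR 16    /* stand-in for aset.c's chunk overhead */

    static size_t
    next_pow2(size_t n)
    {
        size_t p = 1;

        while (p < n)
            p <<= 1;
        return p;
    }

    /* Hypothetical API: shave the payload back so that, once the
     * per-chunk header is added, the total lands exactly on a
     * power-of-2 boundary instead of just past one.  Tiny requests
     * are returned unchanged. */
    size_t
    AdjustRequestSize(size_t request)
    {
        size_t total = request + ASSUMED_CHUNK_HDR;
        size_t sizeclass = next_pow2(total);

        if (sizeclass > total)
            sizeclass /= 2;
        return (sizeclass > ASSUMED_CHUNK_HDR) ? sizeclass - ASSUMED_CHUNK_HDR
                                               : request;
    }

    int
    main(void)
    {
        /* 32768 -> 32752: total allocation is then exactly 32KB */
        printf("%zu\n", AdjustRequestSize(32 * 1024));
        return 0;
    }

Whether to round the total down like this, or up into the next class and
use the slack, is itself a policy question; and either way, whatever
rounding the platform malloc applies underneath aset.c stays invisible
to the calculation.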
regards, tom lane