From: Tomas Vondra <tomas(at)vondra(dot)me>
To: Rahila Syed <rahilasyed90(at)gmail(dot)com>
Cc: Nazir Bilal Yavuz <byavuz81(at)gmail(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Improve monitoring of shared memory allocations
Date: 2025-03-28 13:00:21
Message-ID: 66283922-0448-44e7-bbdd-38aaf2be34b9@vondra.me
Lists: pgsql-hackers
On 3/28/25 12:10, Rahila Syed wrote:
> Hi Tomas,
>
>
> 1) alignment
>
> There was a comment with a question whether we need to MAXALIGN the
> chunks in dynahash.c, which were originally allocated by ShmemAlloc but
> are now part of one large allocation, which is then cut into pieces
> (using pointer arithmetic).
>
> I was not sure whether we need to enforce some alignment, we briefly
> discussed that off-list. I realize you chose to add the alignment, but I
> haven't noticed any comment in the patch why it's needed, and it seems
> to me it may not be quite correct.
>
>
>
> I have added MAXALIGN to specific allocations, such as HASHHDR and
> HASHSEGMENT, with the expectation that allocations in multiples of this,
> like dsize * HASHSEGMENT, would automatically be aligned.
>
Yes, assuming the original allocation is aligned, maxalign-ing the
smaller allocations would ensure that. But there's still the issue with
aligning array elements.
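To make that concrete, the single-chunk approach essentially does this
(simplified sketch, with made-up variable names):

    /* ShmemInitStruct() hands back a CACHELINEALIGN-ed base pointer */
    char       *base = ShmemInitStruct("some hash table", total_size, &found);
    HASHHDR    *hctl = (HASHHDR *) base;
    HASHSEGMENT *dir = (HASHSEGMENT *) (base + MAXALIGN(sizeof(HASHHDR)));

    /*
     * Every offset added to "base" has to be suitably aligned itself,
     * otherwise the casts yield misaligned pointers on strict platforms.
     * MAXALIGN-ing the struct sizes keeps the offsets multiples of
     * MAXALIGN, but it does not give the arrays any cacheline alignment.
     */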
>
>
> Let me explain what I had in mind, and why I think the way v5 doesn't
> actually do that. It took me a while before I understood what alignment
> is about, and for a while it was haunting my patches, so hopefully this
> will help others ...
>
> The "alignment" is about pointers (or addresses), and when a pointer is
> aligned it means the address is a multiple of some number. For example, a
> 4B-aligned pointer is a multiple of 4B, so 0x00000100 is 4B-aligned,
> while 0x00000101 is not. Sometimes we use data types to express the
> alignment, e.g. int-aligned is 4B-aligned, but that's a detail. AFAIK
> the alignment is always 2^k, so 1, 2, 4, 8, ...
>
> The primary reason for alignment is that some architectures require the
> pointers to be well-aligned for a given data type. For example (int*)
> needs to be int-aligned. If you have a pointer that's not 4B-aligned,
> it'll trigger SIGBUS or maybe SIGSEGV. This was true for architectures
> like powerpc; I don't think x86/arm64 have this restriction, i.e. it'd
> work, even if there might be a minor performance impact. Anyway, we
> still enforce/expect correct alignment, because we may still support
> some of those alignment-sensitive platforms, and it's "tidy".
>
> The other reason is that we sometimes use alignment to add padding, to
> reduce contention when accessing elements in hot arrays. We want to
> align to cacheline boundaries, so that a struct does not require
> accessing more cachelines than really necessary. And also to reduce
> contention - the more cachelines, the higher the risk of contention.
>
>
> Thank you for your explanation. I had a similar understanding. However,
> I believed that MAXALIGN and CACHEALIGN are primarily performance
> optimizations
> that do not impact the correctness of the code. This assumption is based
> on the fact
> that I have not observed any failures on GitHub CI, even when changing
> the alignment
> in this part of the code.
>
I hope it didn't come across as explaining something you already know ...
Apologies if that's the case.
As for why the CI doesn't fail on this, that's because it runs on x86,
which tolerates alignment issues. AFAIK all "modern" platforms do. You'd
need some old platform (I recall we saw SIGBUS on powerpc and s390, or
something like that). It'd probably matter even on x86 with something
like AVX, but that's irrelevant for this patch.
I don't even know what the performance impact is on x86. I think it was
measurable years ago, but it has become mostly negligible on recent CPUs.
Another reason is that some of those structs may be "naturally aligned"
because they happen to be multiples of MAXALIGN. For example:
HASHHDR -> 96 bytes
HASHELEMENT -> 16 bytes (this covers HASHBUCKET / HASHSEGMENT too)
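If we want to double-check that claim on a given platform, a couple of
compile-time asserts inside dynahash.c would do (untested sketch; the
sizes depend on the platform and build options):

    StaticAssertDecl(sizeof(HASHHDR) % MAXIMUM_ALIGNOF == 0,
                     "HASHHDR size is not a multiple of MAXALIGN");
    StaticAssertDecl(sizeof(HASHELEMENT) % MAXIMUM_ALIGNOF == 0,
                     "HASHELEMENT size is not a multiple of MAXALIGN");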
>
> Now, back to the patch. The code originally did this in ShmemInitStruct
>
> hashp = ShmemInitStruct(...)
>
> to allocate the hctl, and then
>
> firstElement = (HASHELEMENT *) ShmemAlloc(nelem * elementSize);
>
> in element_alloc(). But this means the "elements" allocation is aligned
> to PG_CACHE_LINE_SIZE, i.e. 128B, because ShmemAllocRaw() does this:
>
> size = CACHELINEALIGN(size);
>
> So it distributes memory in multiples of 128B, and I believe it starts
> at a multiple of 128B.
>
> But the patch reworks this to allocate everything at once, and thus it
> won't get this alignment automatically. AFAIK that's not intentional,
> because no one explicitly mentioned this. And it may not be quite
> desirable, judging by the comment in ShmemAllocRaw().
>
>
> Yes, the patch reworks this to allocate all the shared memory at once.
> It uses ShmemInitStruct which internally calls ShmemAllocRaw. So the
> whole chunk
> of memory allocated is still CACHEALIGNed.
>
> I mentioned v5 adds alignment, but I think it does not do that
> quite correctly. It adds alignment by changing the macros from:
>
> +#define HASH_ELEMENTS_OFFSET(hctl, nsegs) \
> + (sizeof(HASHHDR) + \
> + ((hctl)->dsize * sizeof(HASHSEGMENT)) + \
> + ((hctl)->ssize * (nsegs) * sizeof(HASHBUCKET)))
>
> to
>
> +#define HASH_ELEMENTS_OFFSET(hctl, nsegs) \
> + (MAXALIGN(sizeof(HASHHDR)) + \
> + ((hctl)->dsize * MAXALIGN(sizeof(HASHSEGMENT))) + \
> + ((hctl)->ssize * (nsegs) * MAXALIGN(sizeof(HASHBUCKET))))
>
> First, it uses MAXALIGN, but that's mostly my fault, because my comment
> suggested that - ShmemAllocRaw however makes the case for using
> CACHELINEALIGN.
>
>
> Good catch. For a shared hash table, allocations need to be
> CACHELINEALIGNed.
> I think hash_get_init_size does not need to apply CACHELINEALIGN
> explicitly, as ShmemInitStruct already does this.
> In that case, the size returned by hash_get_init_size just needs to
> MAXALIGN the required structs as per hash_create() requirements, and
> CACHELINEALIGN will be taken care of in ShmemInitStruct at the time of
> allocating the entire chunk.
>
>
> But more importantly, it adds alignment to all hctl fields, and to every
> element of those arrays. But that's not what the alignment was supposed
> to do - it was supposed to align arrays, not individual elements. Not
> only would this waste memory, it would actually break direct access to
> those array elements.
>
>
> I think existing code has occurrences of both, i.e. aligning individual
> elements and aligning arrays.
> A similar precedent exists in the function hash_estimate_size(), which
> applies maxalignment not only to individual structs like HASHHDR,
> HASHELEMENT and entrysize, but also to an array of HASHBUCKET headers.
>
> I agree with you that perhaps we don't need maxalignment for all of
> these structures.
> For example, HASHBUCKET is a pointer to a linked list of elements; it
> might not require alignment if the elements it points to are already
> aligned.
>
>
> But there's another detail - even before this patch, most of the stuff
> was allocated at once by ShmemInitStruct(). Everything except for the
> elements, so to replicate the alignment we only need to worry about that
> last part. So I think this should do:
>
>
>
> +#define HASH_ELEMENTS_OFFSET(hctl, nsegs) \
> + CACHELINEALIGN(sizeof(HASHHDR) + \
> + ((hctl)->dsize * sizeof(HASHSEGMENT)) + \
> + ((hctl)->ssize * (nsegs) * sizeof(HASHBUCKET)))
>
> This is what the 0003 patch does. There's still one minor difference, in
> that we used to align each segment independently - each element_alloc()
> call allocated a new CACHELINEALIGN-ed chunk, while now we have just a
> single chunk. But I think that's OK.
>
>
> Before this patch, the following structures were allocated separately
> using ShmemAllocRaw: the directory, each segment (seg_alloc) and a chunk
> of elements (element_alloc). Hence,
> I don't understand why v-0003* CACHEALIGNs in the manner it does.
>
I may not have gotten it quite right. The intent was to align it the
same way as before, and the allocation used to be sized like this:
Size
hash_get_shared_size(HASHCTL *info, int flags)
{
    Assert(flags & HASH_DIRSIZE);
    Assert(info->dsize == info->max_dsize);
    return sizeof(HASHHDR) + info->dsize * sizeof(HASHSEGMENT);
}
But I got confused a bit. I think you're correct here:
> I think if we want to emulate the current behaviour we should do
> something like:
> CACHELINEALIGN(sizeof(HASHHDR) + dsize * sizeof(HASHSEGMENT)) +
> + CACHELINEALIGN(sizeof(HASHBUCKET) * ssize) * nsegs
> + CACHELINEALIGN(init_size * elementSize);
>
> Like you mentioned, the only difference would be that we would be
> aligning all elements
> at once instead of aligning individual partitions of elements.
>
Right. I'm still not sure whether this makes any difference, or whether
this alignment was merely a consequence of using ShmemAlloc(). I don't
want to make this harder to understand unnecessarily.
Let's keep this simple - without additional alignment. I'll think about
it a bit more, and maybe add it before commit.
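For the record, if we do end up mirroring the old per-chunk alignment, I'd
expect the size computation to look roughly like this (untested sketch; it
would have to live in dynahash.c since HASHHDR is private to it, the
function name is made up, and elementSize is computed the same way
element_alloc() does):

    Size
    hash_size_with_old_alignment(const HASHCTL *info, long nsegs, long init_size)
    {
        Size    elementSize = MAXALIGN(sizeof(HASHELEMENT)) +
                              MAXALIGN(info->entrysize);
        Size    sz;

        /* header + directory used to be a single ShmemInitStruct() chunk */
        sz = CACHELINEALIGN(sizeof(HASHHDR) + info->dsize * sizeof(HASHSEGMENT));

        /* each segment used to be a separate ShmemAllocRaw() chunk */
        sz = add_size(sz, mul_size(nsegs,
                                   CACHELINEALIGN(sizeof(HASHBUCKET) * info->ssize)));

        /* the preallocated elements were yet another ShmemAllocRaw() chunk */
        sz = add_size(sz, CACHELINEALIGN(mul_size(init_size, elementSize)));

        return sz;
    }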
>
> 3) I find the comment before hash_get_init_size a bit unclear/confusing.
> It says this:
>
> * init_size should match the total number of elements allocated during
> * hash table creation, it could be zero for non-shared hash tables
> * depending on the value of nelem_alloc. For more explanation see
> * comments within this function.
> *
> * nelem_alloc parameter is not relevant for shared hash tables.
>
> What does "should match" mean here? Doesn't it *determine* the number of
> elements allocated? What if it doesn't match?
>
>
> By "should match" I mean init_size here *is* equal to nelem in
> hash_create().
>
>
> AFAICS it means the hash table is sized to expect init_size elements,
> but only nelem_alloc elements are actually pre-allocated, right?
>
>
> No. All the init_size elements are pre-allocated for a shared hash table,
> irrespective of the nelem_alloc value.
> For non-shared hash tables, init_size elements are allocated only
> if init_size is less than nelem_alloc; otherwise they are allocated as
> part of expansion.
>
>
> But the
> comment says it's init_size which determines the number of elements
> allocated during creation. Confusing.
>
> It says "it could be zero ... depending on the value of nelem_alloc".
> Depending how? What's the relationship.
>
>
> The relationship is defined in this comment:
> /*
> * For a shared hash table, preallocate the requested number of elements.
> * This reduces problems with run-time out-of-shared-memory conditions.
> *
> * For a non-shared hash table, preallocate the requested number of
> * elements if it's less than our chosen nelem_alloc. This avoids wasting
> * space if the caller correctly estimates a small table size.
> */
>
> The hash_create code is confusing because the variable named nelem_alloc
> is used in two different cases. In the above case nelem_alloc refers to
> the value returned by the choose_nelem_alloc function.
>
> The other nelem_alloc determines the number of elements in each partition
> for a partitioned hash table. This is not what is being referred to in
> the above
> comment.
>
> The bit "For more explanation see comments within this function" is not
> great, if only because there are not many comments within the function,
> so there's no "more explanation". But if there's something important, it
> should be in the main comment, preferably.
>
>
> I will improve the comment in the next version.
>
OK. Do we even need to pass nelem_alloc to hash_get_init_size? It's not
really used except for this bit:
+ if (init_size > nelem_alloc)
+ element_alloc = false;
Can't we determine that before calling the function, to make it a bit less
confusing?
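I'm thinking of something along these lines (hand-waving over the actual
signature of hash_get_init_size in the patch; choose_nelem_alloc() is
static in dynahash.c, so the decision would have to stay in that file):

    /* hypothetical caller-side logic */
    bool    prealloc = (flags & HASH_SHARED_MEM) ||
                       init_size <= choose_nelem_alloc(info->entrysize);

    size = hash_get_init_size(info, flags, init_size, prealloc);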
regards
--
Tomas Vondra