From: Xuneng Zhou <xunengzhou(at)gmail(dot)com>
To: Maxim Orlov <orlovmg(at)gmail(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Ekaterina Sokolova <e(dot)sokolova(at)postgrespro(dot)ru>, pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: Proposal: Limitations of palloc inside checkpointer
Date: 2025-03-12 07:27:31
Message-ID: CABPTF7XYajjHFG1quHFUa_t5h8T4SBMqMdkgQcPNmU99yT2S5g@mail.gmail.com
Lists: pgsql-hackers
Hi,
The patch itself looks OK to me. I'm curious about the trade-offs between this incremental approach and the alternative of using palloc_extended() with the MCXT_ALLOC_HUGE flag. Splitting the requests into fixed-size slices avoids out-of-memory allocation failures and process termination by the OOM killer, which is good. However, it does add some overhead: extra lock acquisition/release cycles and memory movement via memmove(). The natural question is whether the added safety justifies that cost.

Regarding the slice size of 1 GB: is it derived from the MaxAllocSize limit, or was it chosen for other performance reasons? Might a different size offer better performance under typical workloads?

It would be helpful to know the reasoning behind these design decisions.
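
For comparison, here is a minimal sketch of the single-allocation alternative I have in mind. It roughly mirrors the control flow of the existing AbsorbSyncRequests(), just with palloc() swapped for palloc_extended(..., MCXT_ALLOC_HUGE); it is an illustration only, not a proposed patch:

    /*
     * Illustration only: absorb all pending requests in one pass by using
     * a huge allocation instead of processing in slices.  The
     * MCXT_ALLOC_HUGE flag lifts the MaxAllocSize cap on the request copy.
     */
    CheckpointerRequest *requests = NULL;
    int         n;

    LWLockAcquire(CheckpointerCommLock, LW_EXCLUSIVE);

    n = CheckpointerShmem->num_requests;
    if (n > 0)
    {
        requests = (CheckpointerRequest *)
            palloc_extended((Size) n * sizeof(CheckpointerRequest),
                            MCXT_ALLOC_HUGE);
        memcpy(requests, CheckpointerShmem->requests,
               n * sizeof(CheckpointerRequest));
    }

    CheckpointerShmem->num_requests = 0;

    LWLockRelease(CheckpointerCommLock);

    for (int i = 0; i < n; i++)
        RememberSyncRequest(&requests[i].ftag, requests[i].type);

    if (requests)
        pfree(requests);

The obvious downside is that a single multi-gigabyte allocation can fail outright or invite the OOM killer, which is exactly what the sliced approach avoids; hence my question about whether that risk is the main motivation.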
On Sat, Mar 1, 2025 at 00:54, Maxim Orlov <orlovmg(at)gmail(dot)com> wrote:
> I think I figured it out. Here is v4.
>
> If the number of requests is less than 1 GB, the algorithm stays the same
> as before. If we need to process more, we will do it incrementally with
> slices of 1 GB.
>
> Best regards,
> Maxim Orlov.
>
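
If I understand the quoted description correctly, the sliced absorption boils down to something like the sketch below. This is a simplification with names of my own choosing (MAX_SLICE_REQUESTS in particular), not the actual v4 code, but it shows where the extra lock cycles and the memmove() traffic I mentioned above come from:

    /* Slice size chosen so a single palloc() stays under MaxAllocSize. */
    #define MAX_SLICE_REQUESTS \
        ((int) (MaxAllocSize / sizeof(CheckpointerRequest)))

    for (;;)
    {
        CheckpointerRequest *requests = NULL;
        int         n;

        LWLockAcquire(CheckpointerCommLock, LW_EXCLUSIVE);

        n = Min(CheckpointerShmem->num_requests, MAX_SLICE_REQUESTS);
        if (n > 0)
        {
            requests = (CheckpointerRequest *)
                palloc(n * sizeof(CheckpointerRequest));
            memcpy(requests, CheckpointerShmem->requests,
                   n * sizeof(CheckpointerRequest));

            /* Compact the remaining requests to the front of the queue. */
            CheckpointerShmem->num_requests -= n;
            memmove(CheckpointerShmem->requests,
                    CheckpointerShmem->requests + n,
                    CheckpointerShmem->num_requests * sizeof(CheckpointerRequest));
        }

        LWLockRelease(CheckpointerCommLock);

        if (n == 0)
            break;          /* queue drained */

        for (int i = 0; i < n; i++)
            RememberSyncRequest(&requests[i].ftag, requests[i].type);

        pfree(requests);
    }

With fewer pending requests than one slice this degenerates to a single lock cycle with no memmove(), which matches the "stays the same as before" case; beyond that, every additional slice pays one more acquire/release plus a memmove() of the tail of the shared queue.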