| From: | Jingtang Zhang <mrdrivingduck(at)gmail(dot)com> |
|---|---|
| To: | Tomas Vondra <tomas(dot)vondra(at)enterprisedb(dot)com> |
| Cc: | pgsql-hackers(at)lists(dot)postgresql(dot)org |
| Subject: | Re: Make reorder buffer max_changes_in_memory adjustable? |
| Date: | 2024-07-22 03:28:56 |
| Message-ID: | CAPsk3_BcmRTMTSnicUj6dyns2AtoipkqjM2Xp6uDgQj9n4kJ6g@mail.gmail.com |
| Lists: | pgsql-hackers |
Thanks, Tomas.
> Theoretically, yes, we could make max_changes_in_memory a GUC, but it's
> not clear to me how would that help 12/13, because there's ~0% chance
> we'd backpatch that ...
What I mean is not about back-patching. The problem is on the publisher side.
Consider the case where the publisher is PostgreSQL v14 through master (with
streaming support) and the subscriber is 12/13, where streaming is not
supported: the publisher would still run the risk of OOM. The same applies
when a v14-through-master server is the publisher and any open-source CDC
tool is the subscriber.
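To make it concrete, the rough (untested) shape I had in mind is below; the
GUC name, limits, and description text are all placeholders, not a proposal
for the final spelling:

```c
/*
 * Untested sketch: expose reorderbuffer.c's hard-coded restore limit
 * (currently a constant of 4096 changes) as a GUC.
 */

/* In reorderbuffer.c: turn the constant into a plain int variable. */
int			max_changes_in_memory = 4096;

/* In the ConfigureNamesInt[] array of guc_tables.c: */
{
	{"max_changes_in_memory", PGC_USERSET, RESOURCES_MEM,
		gettext_noop("Maximum number of restored changes kept in memory "
					 "per transaction during logical decoding."),
		NULL,
		0
	},
	&max_changes_in_memory,
	4096, 1, INT_MAX,
	NULL, NULL, NULL
},
```

That alone would at least let users lower the limit on publishers that serve
non-streaming subscribers.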
> Wouldn't it be better to have it adjust the value automatically, somehow?
> For example, before restoring the changes, we could count the number of
> transactions, and set it to 4096/ntransactions or something like that.
> Or do something smarter by estimating tuple size, to count it in the
> logical_decoding_work_mem budget.
Yes, I think this issue should have been solved when
logical_decoding_work_mem was initially introduced, but it wasn't. There
could be reasons for that, such as the sub-transaction handling discussed in
the header comment of reorderbuffer.c.
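Still, for the simple counting variant you describe, I imagine something
like the following (untested; the helper name is made up, while
toplevel_by_lsn is the list reorderbuffer.c already maintains):

```c
#include "postgres.h"
#include "replication/reorderbuffer.h"

/*
 * Untested sketch of the auto-adjusting idea: instead of a fixed budget of
 * 4096 restored changes per transaction, split that budget across all
 * top-level transactions currently tracked by the reorder buffer, so that
 * restoring many spilled transactions at once cannot exhaust memory.
 * The function name is hypothetical.
 */
static Size
ReorderBufferRestoreBudget(ReorderBuffer *rb)
{
	Size		ntransactions = 0;
	dlist_iter	iter;

	/* Count the top-level transactions the reorder buffer knows about. */
	dlist_foreach(iter, &rb->toplevel_by_lsn)
		ntransactions++;

	if (ntransactions == 0)
		ntransactions = 1;

	/* Split the former fixed budget; always allow at least one change. */
	return Max((Size) 4096 / ntransactions, (Size) 1);
}
```

Estimating actual tuple sizes against the logical_decoding_work_mem budget
would be smarter, but that seems to require getting the sub-transaction
accounting right first.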
regards, Jingtang