From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tomas Vondra <tomas(at)vondra(dot)me>
Cc: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>, Andres Freund <andres(at)anarazel(dot)de>
Subject: Re: scalability bottlenecks with (many) partitions (and more)
Date: 2024-09-01 23:53:39
Message-ID: CA+TgmoZ-40G7-WoLSsEH+7FqUzrP_0ndCOkewJ7ifAO6MVx0EA@mail.gmail.com
Lists: pgsql-hackers
On Sun, Sep 1, 2024 at 3:30 PM Tomas Vondra <tomas(at)vondra(dot)me> wrote:
> I don't think that's possible with hard-coded size of the array - that
> allocates the memory for everyone. We'd need to make it variable-length,
> and while doing those benchmarks I think we actually already have a GUC
> for that - max_locks_per_transaction tells us exactly what we need to
> know, right? I mean, if I know I'll need ~1000 locks, why not to make
> the fast-path array large enough for that?
I really like this idea. I'm not sure about exactly how many fast path
slots you should get for what value of max_locks_per_transaction, but
coupling the two things together in some way sounds smart.
> Of course, the consequence of this would be making PGPROC variable
> length, or having to point to a memory allocated separately (I prefer
> the latter option, I think). I haven't done any experiments, but it
> seems fairly doable - of course, not sure if it might be more expensive
> compared to compile-time constants.
I agree that this is a potential problem but it sounds like the idea
works well enough that we'd probably still come out quite far ahead
even with a bit more overhead.
--
Robert Haas
EDB: http://www.enterprisedb.com