From: Tomas Vondra <tomas(at)vondra(dot)me>
To: Jakub Wartak <jakub(dot)wartak(at)enterprisedb(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>, Andres Freund <andres(at)anarazel(dot)de>
Subject: Re: scalability bottlenecks with (many) partitions (and more)
Date: 2024-09-17 20:16:04
Message-ID: 3cebb4ab-1168-4259-8cb8-8a8ed7efeb43@vondra.me
Lists: pgsql-hackers
I've spent the last couple days doing all kinds of experiments trying to
find regressions caused by the patch, but no success. Which is good.
Attached is a script that just does a simple pgbench on a tiny table,
with no or very few partitions. The idea is that this will fit into
shared buffers (thus no I/O), and will fit into the 16 fast-path slots
we have now. It can't benefit from the patch - it can only get worse, if
having more fast-path slots hurts.
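To make the shape of the benchmark concrete, here's a hypothetical sketch of the kind of parameter sweep the attached lock-test.sh might do. The scale, duration, and client/partition counts below are illustrative only, not taken from the attachment; the sketch prints each command instead of executing it, so it's safe to run without a server (drop the echo in run() to actually execute):

```shell
#!/bin/sh
# Illustrative sketch of a pgbench sweep over partitions, clients, and
# query mode. All parameter values are made up for illustration.
SCALE=1        # tiny table, so the data fits in shared buffers (no I/O)
DURATION=15    # seconds per pgbench run

# Print each command instead of running it; remove the "echo" to execute.
run() { echo "$@"; }

gen_cmds() {
  for parts in 0 1 10; do
    # re-initialize with the given number of partitions (0 = unpartitioned)
    run pgbench -i -s "$SCALE" --partitions="$parts" test
    for clients in 1 4 16; do
      for mode in simple prepared; do
        # read-only (-S) run, so locking is the dominant cost
        run pgbench -n -S -M "$mode" -c "$clients" -j "$clients" \
            -T "$DURATION" test
      done
    done
  done
}

gen_cmds
```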
I ran this on my two machines, and in both cases the results are +/- 1%
from the master for all combinations of parameters (clients, mode,
number of partitions, ..). In most cases it's actually much closer,
particularly with the default max_locks_per_transaction value.
For higher values of the GUC, I think it's fine too - the differences
are perhaps a bit larger (~1.5%), but it's clearly hardware specific (i5
gets a bit faster, xeon a bit slower). And I'm pretty sure people who
increased that GUC value likely did that because of locking many rels,
and so will actually benefit from the increased fast-path capacity.
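For reference, the GUC in question looks like this in postgresql.conf (the value shown is purely illustrative; 64 is the default):

```
# postgresql.conf -- illustrative value only; the default is 64
max_locks_per_transaction = 1024   # changing it requires a restart
```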
At this point I'm pretty happy and confident the patch is fine. Unless
someone objects, I'll get it committed after going over it one more
time. I decided to commit it as a single change - it would be weird
to have an intermediate state with larger arrays in PGPROC, when that's
not something we actually want.
I still haven't found any places in the docs that should mention this,
except for the bit about max_locks_per_transaction GUC. There's nothing
in SGML mentioning details of fast-path locking. I thought we have some
formula to calculate per-connection memory, but I think I confused that
with the shmem formulas we had in "Managing Kernel Resources". But even
that no longer mentions max_connections in master.
regards
--
Tomas Vondra
Attachment | Content-Type | Size
---|---|---
lock-test.sh | application/x-shellscript | 1.5 KB
lock-test.pdf | application/pdf | 13.9 KB
lock-test.ods | application/vnd.oasis.opendocument.spreadsheet | 96.3 KB