Unnecessary locks for partitioned tables

From: <n(dot)kobzarev(at)aeronavigator(dot)ru>
To: <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Unnecessary locks for partitioned tables
Date: 2022-11-09 11:11:33
Message-ID: 012801d8f42c$06ea5af0$14bf10d0$@aeronavigator.ru

Hello!

Recently I've been bringing a new project into production and immediately ran
into an "out of shared memory" error while querying partitioned tables.

Imagine a huge, busy table that you want to split into hundreds of partitions
by list. The partition key is a kind of surrogate key that can be calculated
outside the database and passed in as a parameter.

create table t (a int) partition by list (a);

-- generate and run the DDL for partitions p0 .. p20 (\gexec executes each result row as a statement in psql)
select format('create table %I partition of t for values in (%s)', 'p' || x, x)
from generate_series(0, 20) x
\gexec
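
Just as a sanity check (not essential to the example), the partitions attached to t can be counted:

-- illustrative only: count the partitions of t
select count(*) from pg_inherits where inhparent = 't'::regclass;   -- expect 21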

The query is executed inside a stored procedure or as a prepared statement by,
let's say, Spring JPA.

prepare s (int) as select * from t where a = $1;

explain (analyze) execute s(0);
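
For reference, with the default plan_cache_mode = auto the first five executions of a prepared statement get custom plans, and only after that the planner may switch to a cached generic plan, so the behaviour below can also be reproduced simply by repeating the execution:

execute s(0);
execute s(0);
execute s(0);
execute s(0);
execute s(0);
explain (analyze) execute s(0);   -- from about here a generic plan may be chosen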

At first the database (version 12, 13, 14, or 15) builds a custom execution
plan like this one:

Query Text: prepare s (int) as select * from t where a = $1;
Seq Scan on p0 t  (cost=0.00..41.88 rows=13 width=4) (actual time=0.011..0.011 rows=0 loops=1)
  Filter: (a = 0)

Only one partition is examined. But at some point the database switches to a
more generic query plan that has to cover the whole range of partitions. To
see the generic plan right away, force it:

begin;

set local plan_cache_mode = force_generic_plan;

Now all the queries look like this:

Query Text: prepare s (int) as select * from t where a = $1;
Append  (cost=0.00..880.74 rows=273 width=4) (actual time=0.014..0.015 rows=0 loops=1)
  Subplans Removed: 20
  ->  Seq Scan on p0 t_1  (cost=0.00..41.88 rows=13 width=4) (actual time=0.013..0.013 rows=0 loops=1)
        Filter: (a = $1)

The most important part here is the count of removed subplans. The database
realizes that all partitions except the interesting one can be skipped. But
under load we start receiving errors:

ERROR:  out of shared memory
HINT:  You might need to increase max_locks_per_transaction.
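
For context on why the lock table overflows (the formula is from the documentation; the values below are just the defaults, not my actual settings):

-- the shared lock table can hold roughly
--   max_locks_per_transaction * (max_connections + max_prepared_transactions)
-- locks, i.e. 64 * (100 + 0) = 6400 entries with the defaults
show max_locks_per_transaction;    -- default 64
show max_connections;              -- default 100
show max_prepared_transactions;    -- default 0

A few sessions each holding thousands of partition locks can exhaust that table quickly.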

OK, let's increase max_locks_per_transaction, but why does this type of query
produce so many locks? It looks like the database takes locks on all the
partitioned objects involved in the query and ONLY AFTER THAT does partition pruning.

Here are the locks held by the backend:

select relation::regclass, mode
from pg_locks
where pid = pg_backend_pid()
  and locktype = 'relation'
  and relation::regclass <> 'pg_locks'::regclass;

"p14" "AccessShareLock"

"p20" "AccessShareLock"

"p17" "AccessShareLock"

"pg_type_oid_index" "AccessShareLock"

"p19" "AccessShareLock"

....... etc

But in real life there are also indexes and keys to lock, up to 5000 locks in
my case for a single select.
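
A quick way to count them in the same session (just an illustration):

select count(*)
from pg_locks
where pid = pg_backend_pid() and locktype = 'relation';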

Setting plan_cache_mode = force_custom_plan resolves the issue and also makes
the database more performant (about 5x in my case, literally from 140 to 650
executions per second on my development PC).
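
For completeness, the setting can be applied per session or persisted for the application role (the role name below is only a placeholder):

set plan_cache_mode = force_custom_plan;                        -- current session
alter role app_user set plan_cache_mode = force_custom_plan;    -- hypothetical role name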

I'm calling on the PG authors: is there any way to optimize prepared queries
and minimize unnecessary locks in a generic query plan? For instance, lock the
parent table, eliminate unnecessary partitions, then lock the remaining
objects. This could help a lot anywhere partition pruning is possible.

Meanwhile I'm using force_custom_plan: no more huge spikes, and all queries run
at the lowest execution times I have seen. The drawback is the overhead of
planning the query on every execution.
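
That planning overhead is easy to inspect, if anyone wants to compare (illustration only; the Planning Time line should reflect the re-planning done for every execution under force_custom_plan):

explain (analyze) execute s(0);   -- compare "Planning Time" with "Execution Time"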

Thanks,

Nikolay
