From: Justin Pryzby <pryzby(at)telsasoft(dot)com>
To: Tobias Gierke <tobias(dot)gierke(at)code-sourcery(dot)de>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Linear slow-down while inserting into a table with an ON INSERT trigger ?
Date: 2021-07-17 04:40:26
Message-ID: 20210717044026.GA19498@telsasoft.com
Lists: pgsql-performance
On Fri, Jul 16, 2021 at 11:27:24PM +0200, Tobias Gierke wrote:
> CREATE OR REPLACE FUNCTION parent_table_changed() RETURNS trigger LANGUAGE plpgsql
> AS $function$
> BEGIN
> UPDATE data_sync SET last_parent_table_change=CURRENT_TIMESTAMP;
> RETURN NEW;
> END;
> $function$
>
> I'm trying to insert 30k rows (inside a single transaction) into the parent
The problem is that you're doing 30k updates of data_sync within a single transaction.
Ideally the table starts as one tuple in one page, but every updated row requires
scanning the previous N row versions, which haven't been vacuumed (and cannot be
until the transaction ends). An UPDATE is essentially a DELETE plus an INSERT, so
the table grows with each update until the transaction ends and it's vacuumed;
the vacuum stats show it:
pages: 176 removed, 1 remain, 0 skipped due to pins, 0 skipped frozen
tuples: 40000 removed, 1 remain, 0 are dead but not yet removable, oldest xmin: 2027
You could run a single UPDATE rather than firing 30k triggers.
Or switch to an INSERT on the table, with an index on it, and have whatever needs
to ingest it read max(last_parent_table_change). Then prune the old entries and
vacuum it outside the transaction. Maybe someone else will have a better
suggestion.
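One way to get a single UPDATE is a statement-level trigger, which fires once per
INSERT statement instead of once per inserted row. A sketch, assuming the parent
table and trigger are named parent_table and parent_table_changed_trg (both names
hypothetical here):

```sql
-- Same body as before; RETURN NULL because the return value is
-- ignored for AFTER ... FOR EACH STATEMENT triggers.
CREATE OR REPLACE FUNCTION parent_table_changed() RETURNS trigger
LANGUAGE plpgsql AS $function$
BEGIN
    UPDATE data_sync SET last_parent_table_change = CURRENT_TIMESTAMP;
    RETURN NULL;
END;
$function$;

DROP TRIGGER IF EXISTS parent_table_changed_trg ON parent_table;

-- Fires once for the whole 30k-row INSERT, so data_sync is
-- updated once per statement rather than once per row.
CREATE TRIGGER parent_table_changed_trg
    AFTER INSERT ON parent_table
    FOR EACH STATEMENT
    EXECUTE FUNCTION parent_table_changed();
```

EXECUTE FUNCTION requires v11 or later; on older releases the equivalent spelling
is EXECUTE PROCEDURE.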
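The INSERT-based variant might look like this (a sketch; the table name
data_sync_log and the cleanup policy are assumptions, not from the original
schema):

```sql
-- Append-only log: each change INSERTs a row instead of
-- updating one hot row, so no long update chain builds up.
CREATE TABLE data_sync_log (
    last_parent_table_change timestamptz NOT NULL
);
CREATE INDEX ON data_sync_log (last_parent_table_change);

-- In the trigger body, replace the UPDATE with:
--   INSERT INTO data_sync_log VALUES (CURRENT_TIMESTAMP);

-- The consumer reads the latest change time via the index:
SELECT max(last_parent_table_change) FROM data_sync_log;

-- Periodic cleanup, run outside the big inserting transaction,
-- keeps only the newest entry and reclaims the dead rows:
DELETE FROM data_sync_log
WHERE last_parent_table_change <
      (SELECT max(last_parent_table_change) FROM data_sync_log);
VACUUM data_sync_log;
```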
--
Justin
Next message: Benjamin Scherrey, 2021-07-17 06:48:45, Re: Linear slow-down while inserting into a table with an ON INSERT trigger ?
Previous message: Tobias Gierke, 2021-07-16 21:27:24, Linear slow-down while inserting into a table with an ON INSERT trigger ?