From:       Craig Ringer <craig(at)postnewspapers(dot)com(dot)au>
To:         Tony Cebzanov <tonyceb(at)andrew(dot)cmu(dot)edu>
Cc:         pgsql-sql(at)postgresql(dot)org
Subject:    Re: Performance problem with row count trigger
Date:       2009-04-02 16:44:42
Message-ID: 49D4EB7A.1070002@postnewspapers.com.au
Lists:      pgsql-sql

Tony Cebzanov wrote:
> The throughput of the first batch of 1,000 is diminished, but still
> tolerable, but after 10,000 inserts, it's gotten much worse. This
> pattern continues, to the point where performance is unacceptable after
> 20k or 30k inserts.
>
> To rule out the performance of the trigger mechanism itself, I swapped
> the trigger out for one that does nothing. The results were the same as
> without the trigger (the first set of numbers), which leads me to
> believe there's something about the UPDATE statement in the trigger that
> is causing this behavior.

MVCC bloat from the constant updates to the assoc_count column, maybe?
If you're using 8.3, I'd expect HOT to save you here. Are you using an
older version of PostgreSQL? If not, have you by any chance defined an
index on assoc_count?
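
A quick way to check is to look at the stats views. Assuming the counter
lives in your dataset table (adjust the name if not), on 8.3 something
like this will show whether dead rows are piling up, whether the updates
are going through HOT, and whether any index covers assoc_count:

  SELECT relname, n_tup_upd, n_tup_hot_upd, n_dead_tup
  FROM pg_stat_user_tables
  WHERE relname = 'dataset';

  -- an index on assoc_count will defeat HOT for those updates
  SELECT indexname, indexdef
  FROM pg_indexes
  WHERE tablename = 'dataset';
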
Also, try to keep records in your `dataset' table as narrow as possible.
If the catalog_id, t_begin, t_end, ctime and mtime fields don't change
nearly as often as the assoc_count field, split them out into a separate
table with a foreign key referencing dataset_id, rather than storing
them directly in the dataset table.
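
Something along these lines, roughly -- dataset_info is just a name I've
made up, and the column types are guesses since I haven't seen your full
schema:

  -- narrow, frequently-updated table: just the key and the counter
  CREATE TABLE dataset (
      dataset_id  serial PRIMARY KEY,
      assoc_count integer NOT NULL DEFAULT 0
  );

  -- wide, mostly-static details, keyed by the same dataset_id
  CREATE TABLE dataset_info (
      dataset_id integer PRIMARY KEY REFERENCES dataset (dataset_id),
      catalog_id integer,
      t_begin    timestamp,
      t_end      timestamp,
      ctime      timestamp,
      mtime      timestamp
  );

That way each counter update only has to write a new version of a tiny
two-column row, instead of copying all the wide, rarely-changing data
every time the trigger fires.
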
--
Craig Ringer