From: Dan Harris <fbsd(at)drivefaster(dot)net>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: query performance question
Date: 2008-06-05 15:43:06
Message-ID: 4848098A.5070500@drivefaster.net
Lists: pgsql-performance
tv(at)fuzzy(dot)cz wrote:
>
> 3) Build a table with totals or maybe subtotals, updated by triggers. This
> requires serious changes in application as well as in database, but solves
> issues of 1) and may give you even better results.
>
> Tomas
>
>
I have tried this. It's not a magic bullet. We do our billing based on
counts from huge tables, so accuracy is important to us. I tried
implementing such a scheme and ended up abandoning it because the
summary table accumulated so many dead tuples during and after large
bulk inserts that selects on it slowed to an unacceptable degree. Even
with a VACUUM issued every few hundred inserts, it still bogged down
under the constant churn of the inserts.
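
For anyone who hasn't seen this pattern, a minimal sketch of such a
trigger-maintained count table looks roughly like the following (the
table, column, and trigger names are made up for illustration, not our
real schema):

    CREATE TABLE line_items (
        customer_id integer NOT NULL,
        amount      numeric
    );

    -- One summary row per customer, kept current by a trigger.
    CREATE TABLE line_item_counts (
        customer_id integer PRIMARY KEY,
        row_count   bigint NOT NULL DEFAULT 0
    );

    CREATE OR REPLACE FUNCTION bump_line_item_count() RETURNS trigger AS $$
    BEGIN
        -- Every inserted detail row rewrites its customer's summary row.
        UPDATE line_item_counts
           SET row_count = row_count + 1
         WHERE customer_id = NEW.customer_id;
        IF NOT FOUND THEN
            -- Simple upsert; fine for a single loader, but racy if two
            -- sessions insert a brand-new customer_id at the same time.
            INSERT INTO line_item_counts (customer_id, row_count)
            VALUES (NEW.customer_id, 1);
        END IF;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER line_items_count_trg
        AFTER INSERT ON line_items
        FOR EACH ROW EXECUTE PROCEDURE bump_line_item_count();

Because each insert into line_items rewrites a summary row, a bulk load
of N rows leaves roughly N dead row versions behind in the tiny count
table, which is exactly the churn described above.
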
I ended up moving this count tracking into the application level. It's
messy and only allows a single instance of the insert program, since the
counts live in program memory, but it was the only way I found to avoid
the penalty of constant table churn from the triggered inserts.
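
On the database side, this boils down to touching the count table once
per batch instead of once per row; with the same made-up names and
placeholder values, the per-load bookkeeping is just:

    -- Applied once at the end of a bulk load, from tallies kept in
    -- program memory: one UPDATE (one dead tuple) per customer per
    -- batch, instead of one per inserted detail row.
    BEGIN;
    UPDATE line_item_counts
       SET row_count = row_count + 50000   -- in-memory tally for this load
     WHERE customer_id = 42;
    COMMIT;
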
-Dan