| From: | Kenneth Marshall <ktm(at)rice(dot)edu> |
|---|---|
| To: | Dan Harris <fbsd(at)drivefaster(dot)net> |
| Cc: | pgsql-performance(at)postgresql(dot)org |
| Subject: | Re: query performance question |
| Date: | 2008-06-05 17:16:39 |
| Message-ID: | 20080605171639.GE5624@it.is.rice.edu |
| Lists: | pgsql-performance |
Dan,
Did you try this with 8.3 and its new HOT functionality?
Ken
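
For reference, 8.3's pg_stat_user_tables view exposes HOT update counts, which is one way to see whether updates on such a summary table are avoiding index bloat. A minimal sketch, assuming a hypothetical summary table named billing_counts (not a name from this thread):

```sql
-- How many updates on the summary table were HOT, and how many dead
-- tuples it currently carries (these columns were added in 8.3).
SELECT n_tup_upd, n_tup_hot_upd, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'billing_counts';
```

A high ratio of n_tup_hot_upd to n_tup_upd means the dead row versions are being pruned within the heap pages, without touching the indexes.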
On Thu, Jun 05, 2008 at 09:43:06AM -0600, Dan Harris wrote:
> tv(at)fuzzy(dot)cz wrote:
>>
>> 3) Build a table with totals or maybe subtotals, updated by triggers. This
>> requires serious changes in the application as well as in the database, but
>> it solves the issues of 1) and may give you even better results.
>>
>> Tomas
>>
>>
> I have tried this. It's not a magic bullet. We do our billing based on
> counts from huge tables, so accuracy is important to us. I tried
> implementing such a scheme and ended up abandoning it because, during and
> after large bulk inserts, the summary table became so full of dead tuples
> that selects on it slowed to an unacceptable speed. Even with a VACUUM
> issued every few hundred inserts, it still bogged down under the constant
> churn of the inserts.
> I ended up moving this count tracking into the application level. It's
> messy, and it only allows a single instance of the insert program because
> the counts are kept in that program's memory, but it was the only way I
> found to avoid the penalty of the constant table churn caused by the
> per-insert trigger updates.
>
> -Dan
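For readers following the thread, here is a minimal sketch of the trigger-maintained summary table Tomas describes and Dan abandoned. The events and billing_counts table and column names are assumptions for illustration, not taken from the thread; the point is that every insert into the base table turns into an UPDATE of a counter row, which is exactly the dead-tuple churn Dan reports:

```sql
-- Hypothetical base and summary tables (names are illustrative only).
CREATE TABLE events (
    account_id integer NOT NULL,
    occurred   timestamptz NOT NULL DEFAULT now()
);

CREATE TABLE billing_counts (
    account_id  integer PRIMARY KEY,
    event_count bigint NOT NULL DEFAULT 0
);

-- Per-row trigger function: bump the counter for the inserted row's account.
CREATE OR REPLACE FUNCTION bump_billing_count() RETURNS trigger AS $$
BEGIN
    UPDATE billing_counts
       SET event_count = event_count + 1
     WHERE account_id = NEW.account_id;
    IF NOT FOUND THEN
        -- First event for this account; a concurrent session could still
        -- race here, so a production version would need to catch a
        -- unique_violation and retry.
        INSERT INTO billing_counts (account_id, event_count)
        VALUES (NEW.account_id, 1);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER events_count_trg
    AFTER INSERT ON events
    FOR EACH ROW EXECUTE PROCEDURE bump_billing_count();
```

Each UPDATE leaves behind a dead version of the counter row, so a heavily inserted account bloats its summary row quickly unless VACUUM keeps up. On 8.3, the update touches only event_count and not the indexed account_id, so it can qualify as a HOT update, which is presumably why Ken raises the question above.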