From: | "Ken" <ken(at)upfactor(dot)com> |
---|---|
To: | "Richard Huxton" <dev(at)archonet(dot)com>, <pgsql-performance(at)postgresql(dot)org> |
Subject: | Re: Help with tuning this query (with explain analyze finally) |
Date: | 2005-03-04 16:36:26 |
Message-ID: | 000e01c520d8$4f899a90$780ba8c0@javadude |
Lists: | pgsql-hackers-win32 pgsql-performance |
Richard,
What do you mean by a summary table? Basically a cache of the query results in a
table with replicated column names from all the joins? I'd probably have to
wipe out the table every minute and re-insert the data for each carrier in
the system. I'm not sure how expensive that operation would be, but I'm
guessing it would be fairly heavy-weight. And maintenance would be a lot
harder because of the duplicated columns, making refactorings of the
database more error-prone. Am I understanding your suggestion correctly?
Please correct me if I've misunderstood.
> Can you turn the problem around? Calculate what you want for all users
> (once every 60 seconds) and stuff those results into a summary table. Then
> let the users query the summary table as often as they like (with the
> understanding that the figures aren't going to update any faster than once
> a minute)
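
For what it's worth, a minimal sketch of that kind of once-a-minute refresh might look like the following. The table and column names (carrier_summary, carrier, shipment, and so on) are hypothetical, since the actual schema isn't shown in this thread:

```sql
BEGIN;
-- DELETE rather than TRUNCATE so concurrent readers keep seeing the
-- old rows (MVCC) until this transaction commits, instead of blocking
-- on the exclusive lock TRUNCATE would take.
DELETE FROM carrier_summary;
INSERT INTO carrier_summary (carrier_id, shipment_count, last_updated)
SELECT c.id, count(s.id), now()
FROM carrier c
LEFT JOIN shipment s ON s.carrier_id = c.id
GROUP BY c.id;
COMMIT;
```

The block would be run every 60 seconds from a scheduled job (cron or similar), and the per-user queries would then read only the small carrier_summary table.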
 | From | Date | Subject |
---|---|---|---|
Next Message | John Arbash Meinel | 2005-03-04 16:56:39 | Re: Help with tuning this query (with explain analyze finally) |
Previous Message | Richard Huxton | 2005-03-04 15:56:25 | Re: Help with tuning this query (with explain analyze finally) |