From: Joseph Shraibman <jks(at)selectacast(dot)net>
To: pgsql-performance(at)postgresql(dot)org
Subject: cache table
Date: 2004-05-04 02:24:02
Message-ID: c76urn$438$1@news.hub.org
Lists: pgsql-performance
I have a big table with some int fields. I frequently need to do
queries like:
SELECT if2, count(*) FROM table WHERE if1 = 20 GROUP BY if2;
The problem is that this is slow and frequently requires a seqscan. I'd
like to cache the results in a second table and update the counts with
triggers, but this would a) require another UPDATE for each
INSERT/UPDATE, which would slow down adding and updating of data, and
b) produce a large number of dead rows for vacuum to clear out.
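For concreteness, something along these lines is what I have in mind
(just a rough, untested sketch; "bigtable" and the "if_counts" summary
table are made-up names):

CREATE TABLE if_counts (
    if1 integer NOT NULL,
    if2 integer NOT NULL,
    cnt bigint  NOT NULL DEFAULT 0,
    PRIMARY KEY (if1, if2)
);

CREATE OR REPLACE FUNCTION maintain_if_counts() RETURNS trigger AS '
BEGIN
    -- new row (or new values of an updated row): bump the count,
    -- inserting the (if1, if2) pair if it is not there yet
    IF TG_OP = ''INSERT'' OR TG_OP = ''UPDATE'' THEN
        UPDATE if_counts SET cnt = cnt + 1
         WHERE if1 = NEW.if1 AND if2 = NEW.if2;
        IF NOT FOUND THEN
            INSERT INTO if_counts (if1, if2, cnt)
            VALUES (NEW.if1, NEW.if2, 1);
        END IF;
    END IF;
    -- old row (or old values of an updated row): decrement the count
    IF TG_OP = ''UPDATE'' OR TG_OP = ''DELETE'' THEN
        UPDATE if_counts SET cnt = cnt - 1
         WHERE if1 = OLD.if1 AND if2 = OLD.if2;
    END IF;
    RETURN NULL;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER if_counts_trg
    AFTER INSERT OR UPDATE OR DELETE ON bigtable
    FOR EACH ROW EXECUTE PROCEDURE maintain_if_counts();

(This isn't bulletproof either: two concurrent inserts of a brand-new
(if1, if2) pair could both miss the UPDATE and collide on the INSERT,
and every change to bigtable now also dirties a row in if_counts, which
is exactly the overhead I'm worried about.)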
It would also be nice if this small table could be locked into the pg
cache somehow. It doesn't need to be stored on disk, because the counts
can always be regenerated from scratch.
So what is the best solution to this problem? I'm sure it must come up
pretty often.