From: Dennis Bjorklund <db(at)zigo(dot)dhs(dot)org>
To: Cosimo Streppone <cosimo(at)streppone(dot)it>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: select count(*) on large tables
Date: 2004-04-08 10:54:29
Message-ID: Pine.LNX.4.44.0404081248430.4551-100000@zigo.dhs.org
Lists: pgsql-performance
On Thu, 8 Apr 2004, Cosimo Streppone wrote:
> The alternative solution I tried, that has an optimal
> speed up, unfortunately is not a way out, and it is based
> on "EXPLAIN SELECT count(*)" output parsing, which
> is obviously *not* reliable.
Try this to get the estimate:
SELECT relname, reltuples from pg_class order by relname;
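For a single table, the same idea can be narrowed down (the table name `mytable` here is a placeholder; note that `reltuples` is only an estimate, refreshed by VACUUM and ANALYZE):

```sql
-- Estimated row count for one table; NOT exact, but essentially free.
SELECT reltuples FROM pg_class WHERE relname = 'mytable';
```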
> The times always get better doing a vacuum (and eventually
> reindex) of the table, and they slowly lower again.
Yes, the estimate is updated by ANALYZE (and by VACUUM).
> Is there an estimate time for this issue to be resolved?
It's not so easy to "fix". The naive fixes make other operations slower;
most notably they reduce concurrency, which is bad since it won't scale as
well for many users.
You can always keep the count yourself and have triggers that update the
count on each insert and delete on the table. It will of course make all
inserts and deletes slower, but if you count all rows often, maybe it's
worth it. Most people do not need to count all rows in a table anyway; you
usually count the rows matching some condition.
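The trigger-based counting described above can be sketched roughly as
follows. All names here (`mytable`, `mytable_count`, the trigger function)
are hypothetical, and this assumes a single-row side table holding the
running count:

```sql
-- Hypothetical side table holding the running count for "mytable".
CREATE TABLE mytable_count (n bigint NOT NULL);
INSERT INTO mytable_count VALUES (0);

-- Trigger function: bump the counter on INSERT, decrement on DELETE.
CREATE FUNCTION mytable_count_trig() RETURNS trigger AS '
BEGIN
    IF TG_OP = ''INSERT'' THEN
        UPDATE mytable_count SET n = n + 1;
        RETURN NEW;
    ELSIF TG_OP = ''DELETE'' THEN
        UPDATE mytable_count SET n = n - 1;
        RETURN OLD;
    END IF;
    RETURN NULL;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER mytable_count_trig
    AFTER INSERT OR DELETE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE mytable_count_trig();

-- A fast exact count is then simply:
SELECT n FROM mytable_count;
```

Note that every writer now updates the same counter row, which serializes
concurrent inserts and deletes on the table; that is exactly the
concurrency trade-off mentioned above.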
--
/Dennis Björklund