| From: | Cosimo Streppone <cosimo(at)streppone(dot)it> |
|---|---|
| To: | pgsql-performance(at)postgresql(dot)org |
| Subject: | select count(*) on large tables |
| Date: | 2004-04-08 09:43:49 |
| Message-ID: | 40751ED5.1010208@streppone.it |
| Lists: | pgsql-performance |
Hello,
I've followed the recent discussion about the particular case of
"select count(*)" queries on large tables being somewhat slow.
I've also seen this issue on the TODO list, so I know
it is not a simple question.
This problem arises for me on very large tables, by which I mean
tables of 1 million rows and above.
The alternative solution I tried gives an excellent speed-up,
but unfortunately it is not a real way out: it is based on
parsing the output of "EXPLAIN SELECT count(*)", which
is obviously *not* reliable.
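Roughly, the kind of thing I mean is the following (the table name
is just an example); the planner's estimate shows up as "rows=NNN"
in the plan text, and the same estimate can also be read directly
from pg_class.reltuples, which is only as fresh as the last
VACUUM/ANALYZE:

  EXPLAIN SELECT count(*) FROM my_large_table;

  SELECT reltuples FROM pg_class WHERE relname = 'my_large_table';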
The times always improve after a vacuum (and possibly a
reindex) of the table, and then slowly degrade again.
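For reference, the maintenance in question is simply (again with an
illustrative table name):

  VACUUM my_large_table;
  REINDEX TABLE my_large_table;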
Is there an estimated time frame for this issue to be resolved?
Can I help in some way (code, test cases, ...)?
--
Cosimo