| From: | Hannu Krosing <hannu(at)tm(dot)ee> |
|---|---|
| To: | "P(dot) Dwayne Miller" <dmiller(at)espgroup(dot)net> |
| Cc: | pgsql-hackers(at)postgresql(dot)org |
| Subject: | Re: Re: select count... |
| Date: | 2001-07-13 16:09:47 |
| Message-ID: | 3B4F1D4B.F089C04F@tm.ee |
| Lists: | pgsql-hackers |
"P. Dwayne Miller" wrote:
>
> I think 4 seconds is way too long to return the results. And NULLs in a
> column should not change the answer. It seems logical that even a sequential
> scan of an index would be much faster than a scan of the table (in this case
> the record size is fairly large).
>
> I'm trying to optimize queries that are being ported from another DBMS, where
> the same query above returns in 10s of milliseconds. 4 secs is simply too
> long. So I'm looking for a way to do it faster.
>
> MS SQL Server docs have optimization hints for such a query and using the
> 'count(requestnumber)' syntax, where requestnumber is an indexed field, was
> suggested.
Could you possibly mean "select count(distinct requestnumber)"?
If the performance of count(xxx) is critical for your app, I suggest
keeping the counts in a separate table maintained by a trigger.
Postgres cannot optimise this to use indexes _only_, because indexes
don't carry commit (visibility) information - that must be checked
against the data heap.
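
For illustration, a minimal sketch of that trigger approach (the table
and column names here - requests, request_count - are assumptions for
the example, not from the original thread):

```sql
-- Side table holding the running row count for "requests".
CREATE TABLE request_count (n bigint NOT NULL);
INSERT INTO request_count VALUES (0);

-- Trigger function: bump the counter on insert, decrement on delete.
CREATE FUNCTION maintain_request_count() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE request_count SET n = n + 1;
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE request_count SET n = n - 1;
    END IF;
    RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER requests_count
AFTER INSERT OR DELETE ON requests
FOR EACH ROW EXECUTE PROCEDURE maintain_request_count();

-- Fast replacement for: SELECT count(*) FROM requests;
SELECT n FROM request_count;
```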
---------------
Hannu