From: David Fetter <david(at)fetter(dot)org>
To: Bryan Field-Elliot <bryan_lists(at)netmeme(dot)org>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: How to optimize select count(*)..group by?
Date: 2005-07-28 16:38:45
Message-ID: 20050728163845.GB22658@fetter.org
Lists: pgsql-general
On Thu, Jul 28, 2005 at 09:19:49AM -0700, Bryan Field-Elliot wrote:
> We have this simple query:
>
> select status, count(*) from customer group by status;
>
> There is already a btree index on status, but, the customer table is
> huge, and this query must be executed very frequently... an
> "explain" on this query shows that it is quite costly (and we notice
> it runs slowly)...
>
> Can someone recommend the best technique to optimize this? We can
> create new indices, we can re-write this query.. But we'd rather not
> add new tables or columns if possible (not just to solve this
> problem).
You're pretty much stuck with either writing triggers that maintain a
cache (summary) table, or living with the performance you have now.
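A hedged sketch of that trigger approach, in case it helps (the table
and function names here are made up for illustration, and assume
`customer.status` is a text column): keep one row per status in a small
summary table, and adjust its counter from a row-level trigger on
`customer`.

```sql
-- Illustrative summary table: one row per status, with a running count.
CREATE TABLE customer_status_count (
    status text PRIMARY KEY,
    cnt    bigint NOT NULL DEFAULT 0
);

-- Seed it once from the existing data.
INSERT INTO customer_status_count
    SELECT status, count(*) FROM customer GROUP BY status;

-- Trigger function that keeps the counts in sync.
CREATE OR REPLACE FUNCTION maintain_status_count() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE customer_status_count
            SET cnt = cnt + 1 WHERE status = NEW.status;
        IF NOT FOUND THEN
            INSERT INTO customer_status_count VALUES (NEW.status, 1);
        END IF;
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE customer_status_count
            SET cnt = cnt - 1 WHERE status = OLD.status;
    ELSIF TG_OP = 'UPDATE'
          AND NEW.status IS DISTINCT FROM OLD.status THEN
        UPDATE customer_status_count
            SET cnt = cnt - 1 WHERE status = OLD.status;
        UPDATE customer_status_count
            SET cnt = cnt + 1 WHERE status = NEW.status;
        IF NOT FOUND THEN
            INSERT INTO customer_status_count VALUES (NEW.status, 1);
        END IF;
    END IF;
    RETURN NULL;  -- AFTER trigger: return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER customer_status_count_trg
    AFTER INSERT OR UPDATE OR DELETE ON customer
    FOR EACH ROW EXECUTE PROCEDURE maintain_status_count();
```

Then the frequent query becomes a trivial read of the small table:

```sql
SELECT status, cnt FROM customer_status_count;
```

One caveat: under heavy concurrent writes, all transactions touching the
same status contend on the same summary row, so writes to `customer`
serialize on those updates. That's the usual price of this trade: cheap
reads, slightly more expensive writes.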
Cheers,
D
--
David Fetter david(at)fetter(dot)org http://fetter.org/
phone: +1 510 893 6100 mobile: +1 415 235 3778
Remember to vote!
Next Message: Richard Huxton, 2005-07-28 16:42:42, Re: How to optimize select count(*)..group by?
Previous Message: Bryan Field-Elliot, 2005-07-28 16:19:49, How to optimize select count(*)..group by?