From: Richard Huxton <dev(at)archonet(dot)com>
To: Bryan Field-Elliot <bryan_lists(at)netmeme(dot)org>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: How to optimize select count(*)..group by?
Date: 2005-07-28 16:42:42
Message-ID: 42E90B02.5030708@archonet.com
Lists: pgsql-general
Bryan Field-Elliot wrote:
> We have this simple query:
>
> select status, count(*) from customer group by status;
>
> There is already a btree index on status, but the customer table is
> huge and this query must be executed very frequently... an "explain" on
> this query shows that it is quite costly (and we notice it runs slowly)...
>
> Can someone recommend the best technique to optimize this? We can create
> new indices, we can re-write this query.. But we'd rather not add new
> tables or columns if possible (not just to solve this problem).
There's no other accurate solution. While PG's MVCC system means you
need less locking, it also makes it tricky to determine whether a row is
visible to your transaction without actually checking it on disk, so
count(*) has to visit every row; the index alone can't answer it.
Simplest solution: have a "customer_status_summary" table and add a
trigger to the "customer" table to keep it up to date. That way, you'll
have extremely cheap counts. Make sure you understand what locking
you'll need with your particular solution; a rough sketch follows.
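A minimal sketch of the trigger approach (untested; the table, function,
and trigger names are just illustrative, and it assumes "status" is text):

    CREATE TABLE customer_status_summary (
        status text PRIMARY KEY,
        n      bigint NOT NULL
    );

    -- Seed the summary from the existing data once
    INSERT INTO customer_status_summary
    SELECT status, count(*) FROM customer GROUP BY status;

    CREATE OR REPLACE FUNCTION customer_status_count() RETURNS trigger AS $$
    BEGIN
        -- Old status loses a row on DELETE and UPDATE
        IF TG_OP = 'DELETE' OR TG_OP = 'UPDATE' THEN
            UPDATE customer_status_summary
               SET n = n - 1
             WHERE status = OLD.status;
        END IF;
        -- New status gains a row on INSERT and UPDATE
        IF TG_OP = 'INSERT' OR TG_OP = 'UPDATE' THEN
            UPDATE customer_status_summary
               SET n = n + 1
             WHERE status = NEW.status;
            IF NOT FOUND THEN
                -- First time this status value has been seen
                INSERT INTO customer_status_summary VALUES (NEW.status, 1);
            END IF;
        END IF;
        RETURN NULL;  -- AFTER trigger, so the return value is ignored
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER customer_status_summary_trg
        AFTER INSERT OR UPDATE OR DELETE ON customer
        FOR EACH ROW EXECUTE PROCEDURE customer_status_count();

The report then becomes a trivial read:

    select status, n from customer_status_summary;

On the locking point: every transaction touching a given status takes a
row lock on the same summary row, so heavy concurrent writes will
serialise there, and two sessions inserting a brand-new status value at
the same time can still race between the UPDATE and the INSERT.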
--
Richard Huxton
Archonet Ltd