From: Scott Carey <scott(at)richrelevance(dot)com>
To: Doug Cole <dougcole(at)gmail(dot)com>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: optimizing query with multiple aggregates
Date: 2009-10-22 21:48:29
Message-ID: C706213D.14E5E%scott@richrelevance.com
Lists: pgsql-performance
On 10/21/09 3:51 PM, "Doug Cole" <dougcole(at)gmail(dot)com> wrote:
> I have a reporting query that is taking nearly all of its time in aggregate
> functions and I'm trying to figure out how to optimize it. The query takes
> approximately 170ms when run with "select *", but when run with all the
> aggregate functions the query takes 18 seconds. The slowness comes from our
> attempt to find distribution data using selects of the form:
>
> SUM(CASE WHEN field >= x AND field < y THEN 1 ELSE 0 END)
>
> repeated across many different x,y values and fields to build out several
> histograms of the data. The main culprit appears to be the CASE statement,
> but I'm not sure what to use instead. I'm sure other people have had similar
> queries and I was wondering what methods they used to build out data like
> this?
You might be able to do this with plain aggregates. Define a function that
generates your partitions (buckets), group by its output, and then apply
aggregate functions to produce the results.
In either case, rather than each result being a column in one result row,
each result will be its own row.
Each row would have a column that defines the type of the result (that you
grouped on), and one with the result value. If each is just a sum, it's
easy. If there are lots of different calculation types, it would be harder.
Potentially, you could wrap that in a subselect to pull out each into its
own column but that is a bit messy.
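For example, the group-by approach might look something like this (a sketch only; `measurements` and `field` are hypothetical names, and `width_bucket` is PostgreSQL's built-in function that maps a value into one of N equal-width buckets):

```sql
-- Each bucket becomes its own result row rather than a column.
-- width_bucket(value, low, high, nbuckets) returns the bucket number.
SELECT width_bucket(field, 0, 100, 10) AS bucket,
       count(*) AS freq
FROM measurements
GROUP BY bucket
ORDER BY bucket;
```

This scans the table once, instead of evaluating one CASE expression per
bucket per row.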
Also, in 8.4, window functions could be helpful. PARTITION BY something that
represents your buckets, perhaps?
http://developer.postgresql.org/pgdocs/postgres/tutorial-window.html
This will generally force a sort, but shouldn't be that bad.
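A sketch of what that might look like (hypothetical table and column names
again); note that the window version attaches the bucket count to every row
rather than collapsing the rows:

```sql
-- One output row per input row, each carrying its bucket's count.
SELECT field,
       count(*) OVER (PARTITION BY width_bucket(field, 0, 100, 10)) AS bucket_count
FROM measurements;
```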
The function used for the group by or partition by could just be a big case
statement to generate a unique int per bucket, or a truncate/rounding
function. It just needs to spit out a unique result for each bucket for the
group or partition.
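The rounding variant can be as simple as integer division. For fixed-width
buckets of 10, for instance (hypothetical names again):

```sql
-- floor(field / 10) maps 0-9 -> 0, 10-19 -> 1, ...: one unique int per bucket.
SELECT floor(field / 10)::int AS bucket,
       count(*) AS freq
FROM measurements
GROUP BY 1
ORDER BY 1;
```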
> Thanks for your help,
> Doug
>