From: Sim Zacks <sim(at)compulab(dot)co(dot)il>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: multiple group by on same table
Date: 2011-05-04 10:59:21
Message-ID: 4DC13189.9040704@compulab.co.il
Lists: pgsql-general
On 05/04/2011 01:51 PM, Leonardo Francalanci wrote:
> Hi,
>
>
> I'm going to need to GROUP BY the same table
> multiple times. That is, something like:
>
> select (some aggregate functions here) from
> tableA group by f1, f2
>
> select (some other aggregate functions here) from
> tableA group by f3, f4
>
> etc
>
> The table is pretty large; can someone suggest the
> best way of doing it? Is running N queries at the same
> time (that is, using N connections with N threads in
> the client code) the only way to speed things up (so
> that the "concurrent scan" thing can help)? Or is it
> more likely that it won't help that much, given that
> we have fairly good storage? Just trying to get some
> ideas before starting testing....
>
> (the table will be 5M rows, and some of the GROUP BY
> selects could return 300-400K groups)
>
> Leonard
The fastest way of doing it would probably be to build materialized
views for each reporting query and call it your data warehouse.
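For example, a minimal sketch of that approach, using the tableA / f1..f4
names from the question and a made-up count(*) aggregate (on releases
without native materialized views this is just a summary table that you
rebuild on whatever schedule the reports need):

-- precompute each reporting aggregate once, then point the reports
-- at the small summary tables instead of rescanning the 5M-row table
CREATE TABLE tablea_by_f1_f2 AS
    SELECT f1, f2, count(*) AS n
    FROM tableA
    GROUP BY f1, f2;

CREATE TABLE tablea_by_f3_f4 AS
    SELECT f3, f4, count(*) AS n
    FROM tableA
    GROUP BY f3, f4;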
In any case, each resultset will require its own query, so the only way
to run them in parallel is to use separate connections. If you use a
single connection, the queries will have to run serially (one after the
other).
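If you do want the concurrent scans without writing threaded client code,
one possible way (not the only one) is contrib/dblink's asynchronous
functions, driving a couple of extra connections from a single session.
A rough sketch; the connection string and the result column types below
are only assumptions:

-- requires the dblink contrib module
SELECT dblink_connect('c1', 'dbname=mydb');
SELECT dblink_connect('c2', 'dbname=mydb');

-- both queries now run at the same time and can share a scan of tableA
SELECT dblink_send_query('c1', 'SELECT f1, f2, count(*) FROM tableA GROUP BY f1, f2');
SELECT dblink_send_query('c2', 'SELECT f3, f4, count(*) FROM tableA GROUP BY f3, f4');

-- collect the results when they are ready
SELECT * FROM dblink_get_result('c1') AS t(f1 int, f2 int, n bigint);
SELECT * FROM dblink_get_result('c2') AS t(f3 int, f4 int, n bigint);

SELECT dblink_disconnect('c1');
SELECT dblink_disconnect('c2');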
Sim