From: Gabriele Bartolini <Gabriele(dot)Bartolini(at)2ndQuadrant(dot)it>
To: Leonardo Francalanci <m_lists(at)yahoo(dot)it>
Cc: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: multiple group by on same table
Date: 2011-05-04 11:16:38
Message-ID: 11ec066d008a8f68f1ffcb9e9620f61a@2ndquadrant.it
Lists: pgsql-general
Ciao Leonardo,
I am not sure whether this applies to your case, but - unless you have
already done so - you could look at window functions
(http://www.postgresql.org/docs/current/interactive/tutorial-window.html).
They require PG 8.4+ though.
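For example, a minimal sketch (f1..f4 are the placeholder columns from
your message, f5 is a hypothetical value column, and count/sum merely
stand in for your aggregates): a single scan of tableA can carry several
window aggregates, each over a different partition.

  SELECT f1, f2, f3, f4,
         count(*) OVER (PARTITION BY f1, f2) AS cnt_f1_f2,
         sum(f5)  OVER (PARTITION BY f3, f4) AS sum_f3_f4
  FROM tableA;

Note that, unlike GROUP BY, this returns one row per input row rather
than one row per group, so it only fits if that result shape works for
you.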
Cheers,
Gabriele
On Wed, 4 May 2011 11:51:08 +0100 (BST), Leonardo Francalanci
<m_lists(at)yahoo(dot)it> wrote:
> Hi,
>
>
> I'm going to need to GROUP BY the same table
> multiple times. That is, something like:
>
> select (some aggregate functions here) from
> tableA group by f1, f2
>
> select (some other aggregate functions here) from
> tableA group by f3, f4
>
> etc
>
> The table is pretty large; can someone suggest the
> best way of doing this? Is running N queries at the
> same time (that is, using N connections with N threads
> in the client code) the only way to speed things up
> (so that PostgreSQL's synchronized sequential scans
> can help)? Or is it more likely that it won't help
> much, given that we have fairly good storage? Just
> trying to get some ideas before I start testing...
>
> (the table will have 5M rows, and some of the GROUP BY
> selects could return 300-400K groups)
>
> Leonardo
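As for the N-connections idea: a sketch of that approach (count(*)
again stands in for your aggregates) is simply to issue each
aggregation from its own session, so that synchronized sequential
scans (the synchronize_seqscans setting, on by default since 8.3) let
the concurrent scans share a single pass over tableA:

  -- session 1:
  SELECT f1, f2, count(*) FROM tableA GROUP BY f1, f2;

  -- session 2:
  SELECT f3, f4, count(*) FROM tableA GROUP BY f3, f4;

Whether that beats running them one after another depends on how much
of the time goes into I/O rather than into the aggregation itself, so
testing on your storage is the right call.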
--
Gabriele Bartolini - 2ndQuadrant Italia
PostgreSQL Training, Services and Support
Gabriele(dot)Bartolini(at)2ndQuadrant(dot)it - www.2ndQuadrant.it