From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: houmanb <houman(at)gmx(dot)at>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: SELECT AND AGG huge tables
Date: 2012-10-16 00:04:34
Message-ID: CAMkU=1wre+8mroYUvEYBArGTpifdV5P8Hk6NuEp9XizNSyQQVg@mail.gmail.com
Lists: pgsql-performance
On Mon, Oct 15, 2012 at 1:59 PM, houmanb <houman(at)gmx(dot)at> wrote:
> Dear all,
> We have a DB containing transactional data.
> There are about 50 to 100 million rows in one huge table.
> We are using Postgres 9.1.6 on Linux with a PCIe SSD card, which gives
> us constant seek times.
>
> A typical select (see below) takes about 200 seconds. As the database
> is the backend for a web-based reporting facility, response times of
> 200 to 500 seconds or more are not acceptable to the customer.
>
> Is there any way to speed up select statements like this:
>
> SELECT
>     SUM(T.x),
>     SUM(T.y),
>     SUM(T.z),
>     AVG(T.a),
>     AVG(T.b)
> FROM T
> WHERE
>     T.creation_date = $SOME_DATE
> GROUP BY
>     T.c;
>
> There is an index on T.c. But would it help to partition the table by T.c?
Probably not.
But an index on creation_date, or on (creation_date, c) might. How
many records are there per day? If you add a count(*) to your select,
what would typical values be?
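Something along these lines, for example (an untested sketch: the index
name is made up, and the table and column names are taken from the query
above):

-- Composite index matching the filter column first, then the
-- grouping column, so the aggregate can scan one date's rows in
-- grouping order.
CREATE INDEX t_creation_date_c_idx ON T (creation_date, c);

-- The same query with count(*) added (and T.c in the select list),
-- to show how many rows fall into each group on a typical day.
SELECT
    T.c,
    COUNT(*),
    SUM(T.x),
    SUM(T.y),
    SUM(T.z),
    AVG(T.a),
    AVG(T.b)
FROM T
WHERE
    T.creation_date = $SOME_DATE
GROUP BY
    T.c;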
Cheers,
Jeff