From: | Merlin Moncure <mmoncure(at)gmail(dot)com> |
---|---|
To: | houmanb <houman(at)gmx(dot)at> |
Cc: | pgsql-performance(at)postgresql(dot)org |
Subject: | Re: SELECT AND AGG huge tables |
Date: | 2012-10-15 21:09:51 |
Message-ID: | CAHyXU0zjK1eTimyo0TwOSY-gpt=kFH+TKwcKGcajJtM0NUzskw@mail.gmail.com |
Lists: | pgsql-performance |
On Mon, Oct 15, 2012 at 3:59 PM, houmanb <houman(at)gmx(dot)at> wrote:
> Dear all,
> We have a DB containing transactional data.
> There are about 50 to 100 million rows in one huge table.
> We are using Postgres 9.1.6 on Linux with a PCIe SSD card, which gives us
> constant seek times.
>
> A typical select (see below) takes about 200 seconds. As the database is the
> backend for a web-based reporting facility, response times of 200 to 500
> seconds or more are not acceptable to the customer.
>
> Is there any way to speed up select statements like this:
>
> SELECT
> SUM(T.x),
> SUM(T.y),
> SUM(T.z),
> AVG(T.a),
> AVG(T.b)
> FROM T
> WHERE
> T.creation_date=$SOME_DATE
> GROUP BY
> T.c;
>
> There is an index on T.c. But would it help to partition the table by T.c?
> It should be mentioned that T.c is actually a foreign key to a table
> containing a tiny number of rows (15 rows representing different companies).
> Our postgresql.conf is otherwise the default; the only value we changed
> was work_mem, which we raised to 128MB.
It might help a little or a lot -- it depends on the plan. I'd also
advise raising shared_buffers to around 25% of RAM for queries like
this.
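To see what the planner is actually doing, it's worth running the
aggregate under EXPLAIN -- the BUFFERS option is available in 9.1 and
shows how much of the work comes off storage. A minimal sketch, reusing
the query from above with a made-up date standing in for $SOME_DATE:

EXPLAIN (ANALYZE, BUFFERS)
SELECT SUM(T.x), SUM(T.y), SUM(T.z), AVG(T.a), AVG(T.b)
FROM T
WHERE T.creation_date = DATE '2012-10-15'  -- stand-in for $SOME_DATE
GROUP BY T.c;

The shared_buffers change goes in postgresql.conf and needs a restart;
assuming, say, a 16GB box (scale to yours), that would be:

shared_buffers = 4GB   # ~25% of an assumed 16GB of RAM; restart required

On partitioning: 9.1 has no declarative partitioning, so it would be
done with inheritance plus CHECK constraints -- roughly like this
(child table names and company ids are made up for illustration;
inserts would also need routing via a trigger on the parent, omitted
here):

CREATE TABLE t_company_1 (CHECK (c = 1)) INHERITS (T);
CREATE TABLE t_company_2 (CHECK (c = 2)) INHERITS (T);
-- ...one child per company, 15 in total

SET constraint_exclusion = partition;  -- the 9.1 default; prunes
                                       -- children via their CHECKs

Note that constraint exclusion only prunes children for queries that
filter on c; the aggregate above filters on creation_date only, so it
would still touch every child -- which is part of why how much
partitioning helps depends on the plan.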
What does your server load look like while aggregating -- are you
storage-bound or CPU-bound? Which SSD? How much data churn do you
have?
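From inside the database, one crude way to tell is the statistics
collector -- pg_statio_user_tables shows, per table, how many blocks
were found in shared buffers versus read in from the OS/storage
(counters are cumulative since the last stats reset; this assumes the
table really is named t):

SELECT relname,
       heap_blks_read,  -- blocks read in from the OS/storage
       heap_blks_hit    -- blocks found already in shared buffers
FROM pg_statio_user_tables
WHERE relname = 't';

A low hit count relative to reads while the query runs points at
storage; iostat and top on the server give the OS-side view of the
same question.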
merlin