From: Sergey Konoplev <gray(dot)ru(at)gmail(dot)com>
To: Sev Zaslavsky <sevzas(at)gmail(dot)com>
Cc: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: slow query - will CLUSTER help?
Date: 2013-12-19 21:24:26
Message-ID: CAL_0b1vGV0kWyVKJM=kOYonEXyt1O0a44STbDpknJKhbFyw_XQ@mail.gmail.com
Lists: pgsql-performance
On Thu, Dec 19, 2013 at 12:54 PM, Sev Zaslavsky <sevzas(at)gmail(dot)com> wrote:
> On 12/19/2013 3:34 PM, Sergey Konoplev wrote:
>> On Thu, Dec 12, 2013 at 9:30 AM, Sev Zaslavsky <sevzas(at)gmail(dot)com> wrote:
>>> Table rt_h_nbbo contains several hundred million rows. All rows for a
>>> given entry_date are appended to this table in an overnight process
>>> every night - on the order of several million rows per day.
>>
>> Do you perform a regular cleaning of the table with DELETEs or may be
>> you use UPDATEs for some another reason?
>
> At this point we're neither deleting nor updating the data once written to
> the db.
Then I can see two reasons for the problem:
1. The indexed data is too big and index searches are getting worse day by day.
I would try to create a partial index for a single day and repeat the
EXPLAIN ANALYZE against that day. If it shows a significant
improvement, then I would start creating a partial index for every new
day before it starts, and drop each one some time later when it
becomes obsolete.
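A minimal sketch of the per-day partial index idea, using the table and
entry_date column from the thread (the indexed column "symbol" and the
date value are just placeholders, since the actual query columns were
not shown):

```sql
-- Partial index covering only one day's rows; the planner can use it
-- for queries whose WHERE clause matches the same entry_date predicate.
CREATE INDEX rt_h_nbbo_20131219_idx
    ON rt_h_nbbo (symbol)
    WHERE entry_date = '2013-12-19';

-- Later, when that day's data is no longer queried:
DROP INDEX rt_h_nbbo_20131219_idx;
```

Because each index covers only one day's slice of the table, it stays
small and cheap to search regardless of how large the table grows.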
2. You are IO-bound.
In that case I would suggest upgrading your storage.
--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA
http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray(dot)ru(at)gmail(dot)com