From: Claudio Freire <klaussfreire(at)gmail(dot)com>
To: Bryce Covert <bryce(at)brycecovertoperations(dot)com>
Cc: postgres performance list <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Slow query due to slow I/O
Date: 2013-12-12 23:39:32
Message-ID: CAGTBQpa23mVdRzACUmAp9uu6qCxMTEmUeBtw8kBo6D5i64yV9Q@mail.gmail.com
Lists: pgsql-performance
On Thu, Dec 12, 2013 at 6:49 PM, Bryce Covert <
bryce(at)brycecovertoperations(dot)com> wrote:
> It's strange that it isn't sequential, at least for the electric usage, as
> I've clustered using the index that it's using.
>
electricusage is the inner side of the nested loop, which means it will do
~3000 small index scans. That's not sequential no matter how much you
cluster. And the join order cannot be reversed (because you're filtering on
premiseaccount).
> I had work_mem set to 128mb. I tried bumping it to 1024mb, and I don't
> think I see a difference in the query plan.
>
128MB is already excessive. If anything, you'd want to lower it.
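A detail worth keeping in mind: work_mem is allocated per sort or hash
operation, per backend, so a large global value multiplies quickly under
concurrency. If one query genuinely needs more, a sketch of raising it for
that session only (the 64MB figure is just an illustrative value):

```sql
-- work_mem applies per sort/hash node, per backend; raise it only
-- for the session running the expensive query, then reset it.
SET work_mem = '64MB';
-- ... run the expensive query here ...
RESET work_mem;
```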
On Thu, Dec 12, 2013 at 8:04 PM, Bryce Covert <
bryce(at)brycecovertoperations(dot)com> wrote:
> Looks like it is doing an index only scan for the first table, but not for
> the second. I tried creating two indexes that theoretically should make it
> not have to go to the physical table:
> "electricusage_premise_account_id_36bc8999ced10059" btree
> (premise_account_id, from_date, usage)
> "ix_covered_2" btree (premise_account_id, from_date DESC, usage, id)
>
> Any idea why it's not using that?
>
Index-only scans need not only a covering index but also pages marked
all-visible in the visibility map, and that takes time (and vacuum
activity) to build up.
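Since only VACUUM maintains the visibility map, one way to speed that up is
to vacuum the table explicitly and then check how many pages are already
all-visible. A sketch, assuming the table name from your index definitions:

```sql
-- Index-only scans depend on the visibility map, which only
-- VACUUM maintains; build it up explicitly.
VACUUM ANALYZE electricusage;

-- relallvisible vs relpages shows how much of the table an
-- index-only scan could skip heap fetches for.
SELECT relpages, relallvisible
  FROM pg_class
 WHERE relname = 'electricusage';
```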
If you're still getting poor performance after that, then I guess you just
have a lousy schema: you're trying to process data that's far too
scattered, too fast.
See, your query processes 15k rows, and reads 18k pages. That's as
scattered as it gets.
The biggest table you've got there (from the looks of this query) is by far
electricusage. You'd need to cluster it by bucket (since that's your query
criterion), but your schema doesn't allow that. I'm not sure whether it's
viable, but if it were, I'd normalize bucket into premiseaccount and
denormalize electricusage so it also refers to that bucket directly. That
way you can filter on electricusage itself, get a bitmap index scan, and
live happily ever after.
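In outline, the denormalization would look something like this. The column
names (premiseaccount.id, premiseaccount.bucket, and bucket's type) are
assumptions based on this thread, not your actual schema:

```sql
-- Copy the bucket down onto electricusage so filters can hit it
-- directly (assumed names and type; adjust to the real schema).
ALTER TABLE electricusage ADD COLUMN bucket integer;

UPDATE electricusage eu
   SET bucket = pa.bucket
  FROM premiseaccount pa
 WHERE eu.premise_account_id = pa.id;

CREATE INDEX ix_electricusage_bucket
    ON electricusage (bucket, from_date);
```

With that in place, a filter on bucket can drive a single bitmap index scan
over electricusage instead of ~3000 inner-loop index probes.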
Failing that, create materialized views, assuming your write patterns allow
it.
And failing that, add more hardware. If Linode doesn't provide it, move
somewhere else.