From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
Cc: Maria(dot)L(dot)Wilson-1(at)nasa(dot)gov, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: help speeding up a query in postgres 8.4.5
Date: 2011-05-11 14:31:36
Message-ID: 25611.1305124296@sss.pgh.pa.us
Lists: pgsql-performance
Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com> writes:
> On Tue, Apr 5, 2011 at 1:25 PM, Maria L. Wilson
> <Maria(dot)L(dot)Wilson-1(at)nasa(dot)gov> wrote:
> This bit:
>> left outer join INVENTORY IV on GV.GRANULE_ID = IV.GRANULE_ID, INVSENSOR
>> INVS
> has both an explicit and an implicit join. This can constrain join
> re-ordering in the planner. Can you change it to explicit joins only
> and see if that helps?
Since there's a WHERE constraint on IV, the outer join is going to be
strength-reduced to an inner join (note the lack of any outer joins in
the plan). So that isn't going to matter.
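[Archive note: a hypothetical sketch of the strength reduction, with made-up table and column names rather than the original query. A strict WHERE predicate on the nullable side rejects every NULL-extended row, so the planner can treat the LEFT JOIN as a plain inner join.]

```sql
-- Hypothetical tables; the strict predicate on IV makes these two
-- forms equivalent, so the planner reduces the first to the second.
SELECT GV.granule_id
FROM granule GV
LEFT JOIN inventory IV ON GV.granule_id = IV.granule_id
WHERE IV.status = 'ONLINE';          -- NULL-extended rows cannot pass this

SELECT GV.granule_id
FROM granule GV
JOIN inventory IV ON GV.granule_id = IV.granule_id
WHERE IV.status = 'ONLINE';
```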
AFAICS this is just plain an expensive query. The two filter
constraints are not very selective, each passing more than a million
rows up to the join. You can't expect to join millions of rows in no
time flat. About all you can do is try to bump up work_mem enough that
the join won't use temp files --- for something like this, that's likely
to require a setting of hundreds of MB. I'm not sure whether Maria is
using a version in which EXPLAIN ANALYZE will show whether a hash join
was batched, but that's what I'd be looking at.
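[Archive note: a sketch of what checking that looks like, with an assumed setting and a hypothetical query rather than the one under discussion. work_mem can be raised for just the current session, and in releases that report it, EXPLAIN ANALYZE prints a "Batches" figure for hash joins: Batches: 1 means the hash table fit in work_mem, anything higher means the join spilled to temp files.]

```sql
-- Session-local setting; no server restart needed.
SET work_mem = '500MB';

EXPLAIN ANALYZE
SELECT count(*)
FROM granule GV
JOIN inventory IV ON GV.granule_id = IV.granule_id;  -- hypothetical query

-- In versions that report it, look for a line like:
--   Buckets: 65536  Batches: 1  Memory Usage: ...
-- under the Hash node.  Batches: 1 means the join ran entirely in
-- work_mem; e.g. Batches: 16 means the hash table was split into
-- sixteen temp-file batches.
```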
regards, tom lane