From: Dimitri <dimitrik(dot)fr(at)gmail(dot)com>
To: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>
Cc: PostgreSQL Performance <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Any better plan for this query?..
Date: 2009-05-06 08:31:03
Message-ID: 5482c80a0905060131r69c157b3j56fbb36652a6bbd9@mail.gmail.com
Lists: pgsql-performance
Hi Heikki,
I've already tried a target of 1000, and the only thing it changes
compared to the current default of 100 is that it now estimates 240
rows instead of 2404, but the plan remains the same.
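As a rough sketch, assuming the target is set on the ref_object column
of HISTORY and the table is re-ANALYZEd before re-running the query,
that test looks like:

-- Raise the per-column statistics target (default 100) for ref_object.
ALTER TABLE history ALTER COLUMN ref_object SET STATISTICS 1000;
-- Rebuild the statistics so the new target takes effect.
ANALYZE history;
-- Re-run the original query to see whether the estimate or plan changes.
EXPLAIN ANALYZE
select S.REF as stref, S.NAME as stnm, H.HORDER as hord, H.BEGIN_DATE as hbeg,
       H.END_DATE as hend, H.NOTE as hnote
  from HISTORY H, STAT S
 where S.REF = H.REF_STAT
   and H.REF_OBJECT = '0000000001'
 order by H.HORDER ;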
Rgds,
-Dimitri
On 5/6/09, Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com> wrote:
> Dimitri wrote:
>> any idea if there is a more optimal execution plan possible for this
>> query:
>>
>> select S.REF as stref, S.NAME as stnm, H.HORDER as hord, H.BEGIN_DATE as hbeg,
>>        H.END_DATE as hend, H.NOTE as hnote
>> from HISTORY H, STAT S
>> where S.REF = H.REF_STAT
>> and H.REF_OBJECT = '0000000001'
>> order by H.HORDER ;
>>
>> EXPLAIN ANALYZE output on 8.4:
>>                                                            QUERY PLAN
>> ------------------------------------------------------------------------------------------------------------------------------------------------
>>  Sort  (cost=4549.75..4555.76 rows=2404 width=176) (actual time=1.341..1.343 rows=20 loops=1)
>>    Sort Key: h.horder
>>    Sort Method:  quicksort  Memory: 30kB
>>    ->  Hash Join  (cost=33.50..4414.75 rows=2404 width=176) (actual time=1.200..1.232 rows=20 loops=1)
>>          Hash Cond: (h.ref_stat = s.ref)
>>          ->  Index Scan using history_ref_idx on history h  (cost=0.00..4348.20 rows=2404 width=135) (actual time=0.042..0.052 rows=20 loops=1)
>>                Index Cond: (ref_object = '0000000001'::bpchar)
>>          ->  Hash  (cost=21.00..21.00 rows=1000 width=45) (actual time=1.147..1.147 rows=1000 loops=1)
>>                ->  Seq Scan on stat s  (cost=0.00..21.00 rows=1000 width=45) (actual time=0.005..0.325 rows=1000 loops=1)
>>  Total runtime: 1.442 ms
>> (10 rows)
>>
>> Table HISTORY contains 200M rows; only 20 are needed.
>> Table STAT contains 1000 rows; only 20 need to be joined to the
>> HISTORY values.
>
> The plan doesn't look too bad to me, although the planner is
> over-estimating the number of matches in the history table (2404 vs 20).
> That's a bit surprising given how simple the predicate is. Make sure
> you've ANALYZEd the table. If that's not enough, you can try increasing
> the statistics target for the ref_object column, i.e. ALTER TABLE history
> ALTER COLUMN ref_object SET STATISTICS 500. That might give you a
> different plan, maybe with a nested loop join instead of a hash join,
> which might be faster in this case.
>
> --
> Heikki Linnakangas
> EnterpriseDB http://www.enterprisedb.com
>
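To see whether the nested loop join Heikki mentions would actually be
faster here, a minimal sketch is to disable hash joins for the current
session only and re-run the query (enable_hashjoin is a standard planner
setting; this is a comparison experiment, not a production fix):

-- Turn off hash joins for this session so the planner falls back to
-- another join method, typically a nested loop in this case.
SET enable_hashjoin = off;
EXPLAIN ANALYZE
select S.REF as stref, S.NAME as stnm, H.HORDER as hord, H.BEGIN_DATE as hbeg,
       H.END_DATE as hend, H.NOTE as hnote
  from HISTORY H, STAT S
 where S.REF = H.REF_STAT
   and H.REF_OBJECT = '0000000001'
 order by H.HORDER ;
-- Restore the default planner behaviour.
RESET enable_hashjoin;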