From: Dimitri <dimitrik(dot)fr(at)gmail(dot)com>
To: Kenneth Marshall <ktm(at)rice(dot)edu>
Cc: PostgreSQL Performance <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Any better plan for this query?..
Date: 2009-05-06 14:48:21
Message-ID: 5482c80a0905060748rab99aebr39c4c1e6797f2cf4@mail.gmail.com
Lists: pgsql-performance
Hi Ken,
yes, I may do that, but I did not expect to get into profiling
initially :-) I expected there was just something trivial within the
plan that I simply didn't know about.. :-)
BTW, is there already a profiler integrated into the code, or do I
need external tools?..
Rgds,
-Dimitri
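(Editorial aside on the profiler question above: as far as I know there is no profiler built into the PostgreSQL backend itself; the usual approach is to attach an external tool such as oprofile, gprof, or DTrace to the backend process. A minimal sketch, not from the thread, for finding which process to attach to:)

```sql
-- Run in the psql session whose query you want to profile: returns the
-- OS process ID of that session's backend, which an external profiler
-- (e.g. oprofile or perf on Linux) can then be attached to.
SELECT pg_backend_pid();
```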
On 5/6/09, Kenneth Marshall <ktm(at)rice(dot)edu> wrote:
> On Wed, May 06, 2009 at 09:38:59AM +0200, Dimitri wrote:
>> Hi,
>>
>> any idea if there is a more optimal execution plan possible for this
>> query:
>>
>> select S.REF as stref, S.NAME as stnm, H.HORDER as hord, H.BEGIN_DATE as
>> hbeg,
>> H.END_DATE as hend, H.NOTE as hnote
>> from HISTORY H, STAT S
>> where S.REF = H.REF_STAT
>> and H.REF_OBJECT = '0000000001'
>> order by H.HORDER ;
>>
>> EXPLAIN ANALYZE output on 8.4:
>> QUERY
>> PLAN
>> ------------------------------------------------------------------------------------------------------------------------------------------------
>> Sort (cost=4549.75..4555.76 rows=2404 width=176) (actual
>> time=1.341..1.343 rows=20 loops=1)
>> Sort Key: h.horder
>> Sort Method: quicksort Memory: 30kB
>> -> Hash Join (cost=33.50..4414.75 rows=2404 width=176) (actual
>> time=1.200..1.232 rows=20 loops=1)
>> Hash Cond: (h.ref_stat = s.ref)
>> -> Index Scan using history_ref_idx on history h
>> (cost=0.00..4348.20 rows=2404 width=135) (actual time=0.042..0.052
>> rows=20 loops=1)
>> Index Cond: (ref_object = '0000000001'::bpchar)
>> -> Hash (cost=21.00..21.00 rows=1000 width=45) (actual
>> time=1.147..1.147 rows=1000 loops=1)
>> -> Seq Scan on stat s (cost=0.00..21.00 rows=1000
>> width=45) (actual time=0.005..0.325 rows=1000 loops=1)
>> Total runtime: 1.442 ms
>> (10 rows)
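(A quick experiment on the plan quoted above, sketched here rather than taken from the thread: temporarily disabling hash joins for the session forces the planner to fall back to its next-best strategy, typically a nested loop probing stat_ref_idx in this case, which makes the two alternatives easy to time against each other.)

```sql
-- Hedged experiment: turn off hash joins for this session only,
-- re-run the query, and compare the resulting plan and timings.
SET enable_hashjoin = off;
EXPLAIN ANALYZE
select S.REF, S.NAME, H.HORDER, H.BEGIN_DATE, H.END_DATE, H.NOTE
from HISTORY H, STAT S
where S.REF = H.REF_STAT
  and H.REF_OBJECT = '0000000001'
order by H.HORDER;
RESET enable_hashjoin;  -- restore default planner behaviour
```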
>>
>> Table HISTORY contains 200M rows, only 20 needed
>> Table STAT contains 1000 rows, only 20 needed to be joined to HISTORY
>> values.
>>
>> Table definitions:
>> """""""""""""""""""""""""""""""""""""""""""""""""""
>> create table STAT
>> (
>> REF CHAR(3) not null,
>> NAME CHAR(40) not null,
>> NUMB INT not null
>> );
>>
>> create table HISTORY
>> (
>> REF_OBJECT CHAR(10) not null,
>> HORDER INT not null,
>> REF_STAT CHAR(3) not null,
>> BEGIN_DATE CHAR(12) not null,
>> END_DATE CHAR(12) ,
>> NOTE CHAR(100)
>> );
>>
>> create unique index stat_ref_idx on STAT( ref );
>> create index history_ref_idx on HISTORY( ref_object, horder );
>> """""""""""""""""""""""""""""""""""""""""""""""""""
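(Another editorial aside: in the EXPLAIN output above the planner estimates rows=2404 for the index scan on HISTORY while only 20 rows actually match, so the table's statistics may be stale. Refreshing them is a cheap first check; this sketch is an assumption, not a suggestion made in the thread.)

```sql
-- Refresh planner statistics for both tables; on a 200M-row table a
-- larger per-column statistics target may also be worth considering.
ANALYZE HISTORY;
ANALYZE STAT;
```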
>>
>> NOTE: The same query runs 2 times faster on MySQL.
>>
>> Any idea?..
>>
>> Rgds,
>> -Dimitri
>>
> Dimitri,
>
> Is there any chance of profiling the postgres backend to see
> where the time is used?
>
> Just an idea,
> Ken
>