From: Adarsh Sharma <eddy(dot)adarsh(at)gmail(dot)com>
To: Sameer Kumar <sameer(dot)kumar(at)ashnik(dot)com>
Cc: PostgreSQL General Discussion Forum <pgsql-general(at)postgresql(dot)org>
Subject: Re: Join Bad Performance on different data types
Date: 2014-03-04 08:19:59
Message-ID: CAGx-QqK0iOgYW7Z4fpZEQ=XekRgDPUAekYsEECds8b_w97yd0A@mail.gmail.com
Lists: pgsql-general
On Tue, Mar 4, 2014 at 1:13 PM, Sameer Kumar <sameer(dot)kumar(at)ashnik(dot)com> wrote:
>
> On Tue, Mar 4, 2014 at 2:57 PM, Adarsh Sharma <eddy(dot)adarsh(at)gmail(dot)com> wrote:
>
>> I tried creating simple and GIN indexes on the column (t_ids), but it is
>> still not helping. Has anyone any ideas, or faced this before? The
>> PostgreSQL version is 9.2.
>>
>
> Have you done a VACUUM ANALYZE or ANALYZE after this step?
>
> You might have to disable sequential scans:
> set enable_seqscan=off;
>
> And then fire the query.
>
>
Thanks Sameer. Yes, I had already done a VACUUM ANALYZE, but I tried
enable_seqscan=off this time, and EXPLAIN ANALYZE finished in 34 seconds:
PLAN
-------------------------------------------------------------------------------
HashAggregate  (cost=10651634346.70..10651780073.12 rows=4163612 width=64) (actual time=34375.675..34764.705 rows=751392 loops=1)
  ->  Nested Loop  (cost=10000000000.03..10646590270.49 rows=336271747 width=64) (actual time=0.217..24988.534 rows=6541944 loops=1)
        ->  Nested Loop  (cost=10000000000.02..10012318364.23 rows=33628639 width=116) (actual time=0.177..3427.380 rows=1431 loops=1)
              ->  Nested Loop  (cost=10000000000.01..10001045237.36 rows=3368723 width=38) (actual time=0.138..3373.767 rows=1431 loops=1)
                    ->  Nested Loop  (cost=10000000000.00..10000097742.23 rows=340181 width=38) (actual time=0.047..2151.183 rows=418145 loops=1)
                          Join Filter: (td.entity_type_id = gtt.id)
                          Rows Removed by Join Filter: 1269335
                          ->  Seq Scan on graph5 td  (cost=10000000000.00..10000077008.13 rows=345413 width=33) (actual time=0.020..1231.823 rows=421870 loops=1)
                          ->  Materialize  (cost=0.00..9.33 rows=4 width=13) (actual time=0.000..0.001 rows=4 loops=421870)
                                ->  Index Scan using geo_type_pkey on graph6 gtt  (cost=0.00..9.31 rows=4 width=13) (actual time=0.009..0.012 rows=4 loops=1)
                    ->  Index Scan using graph2_pkey on graph2 gcr  (cost=0.01..2.69 rows=10 width=33) (actual time=0.002..0.002 rows=0 loops=418145)
                          Index Cond: (id = ANY (td.graph3_id))
              ->  Index Scan using graph3_pkey on graph3 gtd  (cost=0.01..3.25 rows=10 width=115) (actual time=0.035..0.036 rows=1 loops=1431)
                    Index Cond: (id = ANY (gcr.t_ids))
        ->  Index Scan using graph1_pkey on graph1 glt  (cost=0.01..18.51 rows=10 width=55) (actual time=0.085..9.082 rows=4572 loops=1431)
              Index Cond: (id = ANY (gtd.lat_long_id_list))
Total runtime: 34810.040 ms
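For experiments like the one above, the planner override can be kept
transaction-local so it never leaks into other sessions. A minimal sketch
(the query under test is elided, since the original SQL was not posted):

```sql
BEGIN;
SET LOCAL enable_seqscan = off;  -- reverts automatically at COMMIT/ROLLBACK
EXPLAIN ANALYZE SELECT ...;      -- the query under test
ROLLBACK;
```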
Is there any way I can rewrite the query so that I need not set
enable_seqscan=off? I don't want to embed one more line in the application
code, and I also don't want to change the global setting in postgresql.conf
to disable sequential scans.
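One route is to make the array joins themselves index-friendly. The plan
joins through array membership (`id = ANY(t_ids)` and similar), which the
planner costs poorly. As a hedged sketch only — the actual query was never
posted, so the table and column names below (graph2, graph3, t_ids) are
taken from the plan above and the surrounding query shape is assumed —
the array can be expanded with unnest() in a subquery (9.2 has no LATERAL),
turning the membership test into a plain equijoin:

```sql
-- Hypothetical rewrite sketch; names come from the plan, not the real query.
SELECT gtd.*
FROM (SELECT id, unnest(t_ids) AS t_id
      FROM graph2) AS gcr          -- expand the array: one row per element
JOIN graph3 gtd
  ON gtd.id = gcr.t_id;            -- plain equijoin; can use graph3_pkey
```

Alternatively, `ALTER ROLE app_user SET enable_seqscan = off;` (role name is
a placeholder) scopes the planner setting to one role's sessions without
touching application code or postgresql.conf.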
Thanks
Next Message: Sameer Kumar | 2014-03-04 08:56:08 | Re: Join Bad Performance on different data types
Previous Message: Pavel Stehule | 2014-03-04 07:48:38 | Re: Join Bad Performance on different data types