From: Patrick Hatcher <PHatcher(at)macys(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Slow query. Any way to speed up?
Date: 2006-01-06 18:55:28
Message-ID: OFD2EE5DFB.2FA6D785-ON882570EE.00679B43-882570EE.0067F414@FDS.com
Lists: pgsql-performance
Duh, sorry. We will eventually move to 8.x; it's just a matter of finding
the time:
Explain analyze
Select gmmid, gmmname, divid, divname, feddept, fedvend,
       itemnumber as mstyle, amc_week_id,
       sum(tran_itm_total) as net_dollars
FROM public.tbldetaillevel_report a2
  join cdm.cdm_ddw_tran_item a1 on a1.item_upc = a2.upc
  join public.date_dim a3 on a3.date_dim_id = a1.cal_date
where a3.date_dim_id between '2005-10-30' and '2005-12-31'
  and a1.appl_id in ('MCOM', 'NET')
  and a1.tran_typ_id in ('S', 'R')
group by 1,2,3,4,5,6,7,8
order by 1,2,3,4,5,6,7,8
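(As an aside, one thing that might be worth trying: since the plan below shows a full sequential scan of cdm_ddw_tran_item with the appl_id/tran_typ_id conditions applied as a filter, a partial index matching those conditions could let the planner skip most of the table. This is only a sketch; the index name is made up, and whether it actually wins depends on how selective those predicates are on your data:)

```sql
-- Sketch only: a partial index on the join/filter column, restricted to
-- the rows this query keeps.  Name is hypothetical; test before relying on it.
CREATE INDEX cdm_ddw_tran_item_caldate_part_idx
    ON cdm.cdm_ddw_tran_item (cal_date)
 WHERE appl_id IN ('MCOM', 'NET')
   AND tran_typ_id IN ('S', 'R');
```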
GroupAggregate  (cost=1648783.47..1650793.74 rows=73101 width=65) (actual time=744556.289..753136.278 rows=168343 loops=1)
  ->  Sort  (cost=1648783.47..1648966.22 rows=73101 width=65) (actual time=744556.236..746634.566 rows=1185096 loops=1)
        Sort Key: a2.gmmid, a2.gmmname, a2.divid, a2.divname, a2.feddept, a2.fedvend, a2.itemnumber, a3.amc_week_id
        ->  Merge Join  (cost=1598067.59..1642877.78 rows=73101 width=65) (actual time=564862.772..636550.484 rows=1185096 loops=1)
              Merge Cond: ("outer".upc = "inner".item_upc)
              ->  Index Scan using report_upc_idx on tbldetaillevel_report a2  (cost=0.00..47642.36 rows=367309 width=58) (actual time=82.512..65458.137 rows=365989 loops=1)
              ->  Sort  (cost=1598067.59..1598250.34 rows=73100 width=23) (actual time=564764.506..566529.796 rows=1248862 loops=1)
                    Sort Key: a1.item_upc
                    ->  Hash Join  (cost=94.25..1592161.99 rows=73100 width=23) (actual time=493500.913..548924.039 rows=1248851 loops=1)
                          Hash Cond: ("outer".cal_date = "inner".date_dim_id)
                          ->  Seq Scan on cdm_ddw_tran_item a1  (cost=0.00..1547562.88 rows=8754773 width=23) (actual time=14.219..535704.691 rows=10838135 loops=1)
                                Filter: ((((appl_id)::text = 'MCOM'::text) OR ((appl_id)::text = 'NET'::text)) AND ((tran_typ_id = 'S'::bpchar) OR (tran_typ_id = 'R'::bpchar)))
                          ->  Hash  (cost=94.09..94.09 rows=64 width=8) (actual time=362.953..362.953 rows=0 loops=1)
                                ->  Index Scan using date_date_idx on date_dim a3  (cost=0.00..94.09 rows=64 width=8) (actual time=93.710..362.802 rows=63 loops=1)
                                      Index Cond: ((date_dim_id >= '2005-10-30'::date) AND (date_dim_id <= '2005-12-31'::date))
Total runtime: 753467.847 ms
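(Another aside: the two large sorts above each handle over a million rows and account for much of the runtime, which usually means they are spilling to disk. Raising the per-sort memory for the session may help; 7.4 calls the setting sort_mem and measures it in kB, while 8.x renames it work_mem. The 64 MB figure below is just a guess to experiment with, not a recommendation:)

```sql
-- Sketch: raise per-sort memory for this session only (7.4 setting name;
-- value in kB, so 65536 = 64 MB -- an assumed starting point).
SET sort_mem = 65536;
```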
Patrick Hatcher
Development Manager Analytics/MIO
Macys.com
415-422-1610
On 01/05/06 09:07 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote
To: Patrick Hatcher <PHatcher(at)macys(dot)com>
cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: [PERFORM] Slow query. Any way to speed up?
Patrick Hatcher <PHatcher(at)macys(dot)com> writes:
> The following SQL takes 4+ mins to run. I have indexes on all join fields
> and I've tried rearranging the table orders but haven't had any luck.
Please show EXPLAIN ANALYZE output, not just EXPLAIN. It's impossible
to tell whether the planner is making any wrong guesses when you can't
see the actual times/rowcounts ...
(BTW, 7.4 is looking pretty long in the tooth.)
regards, tom lane