Re: 7.3.1 New install, large queries are slow

From: Tomasz Myrta <jasiek(at)klaster(dot)net>
To: Roman Fail <rfail(at)posportal(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: 7.3.1 New install, large queries are slow
Date: 2003-01-15 19:31:34
Message-ID: 3E25B716.4090401@klaster.net
Lists: pgsql-performance

Roman Fail wrote:
<cut>

EXPLAIN ANALYZE RESULTS:
Limit (cost=370518.31..370518.31 rows=1 width=540) (actual time=1168722.18..1168722.20 rows=5 loops=1)
  ->  Sort (cost=370518.31..370518.31 rows=1 width=540) (actual time=1168722.18..1168722.18 rows=5 loops=1)
        Sort Key: b.batchdate
        ->  Nested Loop (cost=314181.17..370518.30 rows=1 width=540) (actual time=1148191.12..1168722.09 rows=5 loops=1)
              Join Filter: ("inner".batchdetailid = "outer".batchdetailid)
              ->  Nested Loop (cost=314181.17..370461.79 rows=1 width=502) (actual time=1148167.55..1168671.80 rows=5 loops=1)
                    Join Filter: ("inner".batchdetailid = "outer".batchdetailid)
                    ->  Nested Loop (cost=314181.17..370429.29 rows=1 width=485) (actual time=1148167.48..1168671.45 rows=5 loops=1)
                          Join Filter: ("inner".batchdetailid = "outer".batchdetailid)
                          ->  Nested Loop (cost=314181.17..370396.79 rows=1 width=476) (actual time=1148167.41..1168671.08 rows=5 loops=1)
                                Join Filter: ("inner".batchdetailid = "outer".batchdetailid)
                                ->  Nested Loop (cost=314181.17..314402.47 rows=1 width=457) (actual time=1139099.39..1139320.79 rows=5 loops=1)
                                      Join Filter: ("outer".cardtypeid = "inner".cardtypeid)
                                      ->  Merge Join (cost=314181.17..314401.24 rows=1 width=443) (actual time=1138912.13..1139133.00 rows=5 loops=1)
                                            Merge Cond: ("outer".batchid = "inner".batchid)
                                            ->  Sort (cost=127418.59..127418.59 rows=3 width=150) (actual time=9681.91..9681.93 rows=17 loops=1)
                                                  Sort Key: b.batchid
                                                  ->  Hash Join (cost=120787.32..127418.56 rows=3 width=150) (actual time=7708.04..9681.83 rows=17 loops=1)
                                                        Hash Cond: ("outer".merchantid = "inner".merchantid)
                                                        ->  Merge Join (cost=120781.58..125994.80 rows=283597 width=72) (actual time=7655.57..9320.49 rows=213387 loops=1)
                                                              Merge Cond: ("outer".tranheaderid = "inner".tranheaderid)
                                                              ->  Index Scan using tranheader_ix_tranheaderid_idx on tranheader t (cost=0.00..121.15 rows=1923 width=16) (actual time=0.15..10.86 rows=1923 loops=1)
                                                                    Filter: (clientid = 6)
                                                              ->  Sort (cost=120781.58..121552.88 rows=308520 width=56) (actual time=7611.75..8162.81 rows=329431 loops=1)
                                                                    Sort Key: b.tranheaderid
                                                                    ->  Seq Scan on batchheader b (cost=0.00..79587.23 rows=308520 width=56) (actual time=0.90..4186.30 rows=329431 loops=1)
                                                                          Filter: (batchdate > '2002-12-15 00:00:00'::timestamp without time zone)
                                                        ->  Hash (cost=5.74..5.74 rows=1 width=78) (actual time=31.39..31.39 rows=0 loops=1)
                                                              ->  Index Scan using merchants_ix_merchid_idx on merchants m (cost=0.00..5.74 rows=1 width=78) (actual time=31.38..31.38 rows=1 loops=1)
                                                                    Index Cond: (merchid = '701252267'::character varying)
                                            ->  Sort (cost=186762.59..186872.62 rows=44010 width=293) (actual time=1127828.96..1128725.39 rows=368681 loops=1)
                                                  Sort Key: d.batchid
                                                  ->  Index Scan using batchdetail_ix_tranamount_idx on batchdetail d (cost=0.00..176768.18 rows=44010 width=293) (actual time=35.48..1104625.54 rows=370307 loops=1)
                                                        Index Cond: ((tranamount >= 500.0) AND (tranamount <= 700.0))
                                      ->  Seq Scan on cardtype c (cost=0.00..1.10 rows=10 width=14) (actual time=37.44..37.47 rows=10 loops=5)
                                ->  Seq Scan on purc1 p1 (cost=0.00..44259.70 rows=938770 width=19) (actual time=98.09..4187.32 rows=938770 loops=5)
                          ->  Seq Scan on direct dr (cost=0.00..20.00 rows=1000 width=9) (actual time=0.00..0.00 rows=0 loops=5)
                    ->  Seq Scan on carrental cr (cost=0.00..20.00 rows=1000 width=17) (actual time=0.00..0.00 rows=0 loops=5)
              ->  Seq Scan on checks ck (cost=0.00..40.67 rows=1267 width=38) (actual time=1.03..7.63 rows=1267 loops=5)
Total runtime: 1168881.12 msec
<cut>

It looks like your execution time is a query problem, not a hardware problem.
The query hardly uses indexes at all. You said that you have a normalized database,
so you should have a lot of explicit joins, which work pretty well on PostgreSQL.
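
For example, this is the kind of explicit join I mean, written against the tables
visible in your plan (only a sketch -- I am guessing that batchdetail.batchid
references batchheader.batchid from the Merge Cond line in your plan):

SELECT b.batchdate, d.tranamount
FROM batchheader b
JOIN batchdetail d ON d.batchid = b.batchid
WHERE b.batchdate > '2002-12-15'
  AND d.tranamount BETWEEN 500.0 AND 700.0;

Written this way the join conditions are visible at a glance, and on 7.3 the order
of explicit JOIN clauses also constrains the order in which the planner joins the tables.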

Can you post some examples of your queries? If that is difficult,
at least post one example for which you get a "Join Filter" line in EXPLAIN ANALYZE.
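
Something in this form would do (a sketch only -- I am guessing the join column
between batchdetail and purc1 from the Join Filter lines in your plan):

EXPLAIN ANALYZE
SELECT d.batchdetailid, p1.*
FROM batchdetail d
JOIN purc1 p1 ON p1.batchdetailid = d.batchdetailid
WHERE d.tranamount BETWEEN 500.0 AND 700.0;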

From your EXPLAIN ANALYZE result:
Seq Scan on batchheader b (cost=0.00..79587.23 rows=308520 width=56)
Can you tell us what conditions and indexes batchheader has?
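
In psql something like this shows it (the CREATE INDEX below is only a sketch with a
made-up name -- it is the kind of index I would test if batchdate turns out not to be indexed):

\d batchheader
SELECT indexname, indexdef FROM pg_indexes WHERE tablename = 'batchheader';

-- only if there is no index on batchdate:
CREATE INDEX batchheader_batchdate_idx ON batchheader (batchdate);
ANALYZE batchheader;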

Regards,
Tomasz Myrta
