Re: 7.3.1 New install, large queries are slow

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Roman Fail" <rfail(at)posportal(dot)com>
Cc: "Josh Berkus" <josh(at)agliodbs(dot)com>, sszabo(at)megazone23(dot)bigpanda(dot)com, pgsql-performance(at)postgresql(dot)org
Subject: Re: 7.3.1 New install, large queries are slow
Date: 2003-01-16 19:46:33
Message-ID: 26712.1042746393@sss.pgh.pa.us
Lists: pgsql-performance

"Roman Fail" <rfail(at)posportal(dot)com> writes:
> -> Merge Join (cost=1543595.18..1545448.76 rows=1 width=172) (actual time=1195311.88..1195477.32 rows=5 loops=1)
>      Merge Cond: ("outer".batchid = "inner".batchid)
>      -> Sort (cost=476.17..476.18 rows=4 width=102) (actual time=30.57..30.59 rows=17 loops=1)
>           Sort Key: b.batchid
>           -> Nested Loop (cost=0.00..476.14 rows=4 width=102) (actual time=25.21..30.47 rows=17 loops=1)
>                -> Index Scan using merchants_ix_merchid_idx on merchants m (cost=0.00..5.65 rows=1 width=78) (actual time=23.81..23.82 rows=1 loops=1)
>                     Index Cond: (merchid = '701252267'::character varying)
>                -> Index Scan using batchheader_ix_merchantid_idx on batchheader b (cost=0.00..470.30 rows=15 width=24) (actual time=1.38..6.55 rows=17 loops=1)
>                     Index Cond: ("outer".merchantid = b.merchantid)
>                     Filter: (batchdate > '2002-12-15 00:00:00'::timestamp without time zone)
>      -> Sort (cost=1543119.01..1544045.79 rows=370710 width=70) (actual time=1194260.51..1194892.79 rows=368681 loops=1)
>           Sort Key: d.batchid
>           -> Index Scan using batchdetail_ix_tranamount_idx on batchdetail d (cost=0.00..1489103.46 rows=370710 width=70) (actual time=5.26..1186051.44 rows=370307 loops=1)
>                Index Cond: ((tranamount >= 500.0) AND (tranamount <= 700.0))

The expensive part of this is clearly the sort and merge of the rows
extracted from batchdetail. The index on tranamount is not helping
you at all, because the condition (between 500 and 700) isn't very
selective --- it picks up 370000 rows --- and since those rows are
totally randomly scattered in the table, you do a ton of random
seeking. It's actually faster to scan the table linearly --- that's why
enable_indexscan=off was faster.
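
You can verify this yourself by comparing the two plans side by side.  A
minimal sketch, using the table and column names from the plan above
(the isolated WHERE clause is my own reduction of the query, not the
original):

    SET enable_indexscan = off;
    EXPLAIN ANALYZE
      SELECT count(*) FROM batchdetail d
      WHERE d.tranamount >= 500.0 AND d.tranamount <= 700.0;
    RESET enable_indexscan;

With indexscans disabled the planner falls back to a sequential scan,
which reads the heap in physical order instead of doing ~370000 random
heap fetches driven by the tranamount index.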

However, I'm wondering why the thing picked this plan, when it knew it
would get only a few rows out of the m/b join (estimate 4, actual 17,
not too bad). I would have expected it to use an inner indexscan on
d.batchid. Either you've not got an index on d.batchid, or there's a
datatype mismatch that prevents the index from being used. What are the
datatypes of d.batchid and b.batchid, exactly? If they're not the same,
either make them the same or add an explicit coercion to the query, like
WHERE d.batchid = b.batchid::typeof_d_batchid
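
To spell that out: first check the declared types (in psql, \d batchdetail
and \d batchheader), and make sure an index on d.batchid actually exists.
A hedged sketch of the fix, assuming for illustration that d.batchid is
bigint while b.batchid is integer (your actual types may differ):

    -- if no index exists on the join column:
    CREATE INDEX batchdetail_ix_batchid_idx ON batchdetail (batchid);

    -- with mismatched types, cast the narrower side explicitly so the
    -- planner can use the index on d.batchid:
    ... WHERE d.batchid = b.batchid::bigint ...

With matching types (or the explicit coercion) the planner can drive an
inner indexscan on d.batchid from the handful of rows produced by the
merchants/batchheader join, instead of sorting 370000 rows.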

regards, tom lane
