From: Dieter Rehbein <dieter(dot)rehbein(at)skiline(dot)cc>
To: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Join between 2 tables always executes a sequential scan on the larger table
Date: 2013-04-03 08:24:54
Message-ID: 83B53A91-DEA4-4528-9FF4-71D2AC92AD13@skiline.cc
Lists: pgsql-performance
Hi Tom,
thanks for your reply. It was the sequential scan on table user (about 1 million rows) that really surprised me. But a sequential scan over 1 million users seems to be more efficient than an index scan for 41,000 rows.
If I execute the query with the ID of a competition with fewer participants, the query gets a plan as expected:
Nested Loop Left Join (cost=0.00..21385.72 rows=1684 width=42) (actual time=1.300..138.223 rows=1757 loops=1)
-> Seq Scan on user_2_competition uc (cost=0.00..7026.93 rows=1684 width=33) (actual time=1.253..90.846 rows=1757 loops=1)
Filter: ((competition_id)::text = '3cc1cb963b988f12013bc737b4590001'::text)
-> Index Scan using user_pkey on "user" u (cost=0.00..8.51 rows=1 width=42) (actual time=0.026..0.026 rows=1 loops=1757)
Index Cond: ((id)::text = (uc.user_id)::text)
Total runtime: 138.505 ms
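For context, the plans in this thread presumably come from a query of roughly this shape (reconstructed from the EXPLAIN output, not the original statement; the select list is a guess):

    SELECT u.*
    FROM user_2_competition uc
    LEFT JOIN "user" u ON u.id = uc.user_id
    WHERE uc.competition_id = '3cc1cb963b988f12013bc737b4590001';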
regards
Dieter
On 02.04.2013, at 17:45, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
Igor Neyman <ineyman(at)perceptron(dot)com> writes:
> The output of explain analyze (Postgres 9.2.3):
> Hash Left Join (cost=111357.64..126222.29 rows=41396 width=42) (actual time=1982.543..2737.331 rows=41333 loops=1)
> Hash Cond: ((uc.user_id)::text = (u.id)::text)
> -> Seq Scan on user_2_competition uc (cost=0.00..4705.21 rows=41396 width=33) (actual time=0.019..89.691 rows=41333 loops=1)
> Filter: ((competition_id)::text = '3cc1cb9b3ac132ad013ad01316040001'::text)
> Rows Removed by Filter: 80684
> -> Hash (cost=90074.73..90074.73 rows=999673 width=42) (actual time=1977.604..1977.604 rows=999673 loops=1)
> Buckets: 2048 Batches: 128 Memory Usage: 589kB
> -> Seq Scan on "user" u (cost=0.00..90074.73 rows=999673 width=42) (actual time=0.004..1178.827 rows=999673 loops=1)
> Total runtime: 2740.723 ms
> I expected to see an index-scan on user_2_competition with a hash join to user, not a sequential scan on user. I've tried this with Postgres 9.1 and 9.2.3.
According to the numbers, you're fetching about a third of the
user_2_competition table, which is well past the point where an
indexscan is of any use. It's picking the seqscan because it thinks
that's faster, and I'm sure it's right.
The aspect of this plan that actually seems a bit dubious is that it's
hashing the larger input table rather than the smaller one. There's
a thread going on about that in -hackers right now; we think it's
probably putting too much emphasis on the distribution of the join key
as opposed to the size of the table.
One thing that would help is increasing work_mem --- it looks like you
are using the default 1MB. Cranking that up to a few MB would reduce
the number of hash batches needed.
regards, tom lane
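Following up on the work_mem suggestion: it can be raised just for the current session before re-running the query. The value below is only an illustrative guess, not a recommendation from the thread:

    -- session-local override; 8MB is an illustrative value, sized to reduce the number of hash batches
    SET work_mem = '8MB';
    EXPLAIN ANALYZE
    SELECT u.*
    FROM user_2_competition uc
    LEFT JOIN "user" u ON u.id = uc.user_id
    WHERE uc.competition_id = '3cc1cb9b3ac132ad013ad01316040001';
    RESET work_mem;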
Happy Skiing!
Dieter Rehbein
Software Architect | dieter(dot)rehbein(at)skiline(dot)cc
Skiline Media GmbH
Lakeside B03
9020 Klagenfurt, Austria
fon: +43 463 249445-800
fax: +43 463 249445-102
"Erlebe Skifahren neu!"
CONFIDENTIALITY: This e-mail and any attachments are confidential and may also be privileged. If you are not the designated recipient, please notify the sender immediately by reply e-mail and destroy all copies (digital and paper). Any unauthorized disclosure, distribution, copying, storage or use of the information contained in this e-mail or any attachments is strictly prohibited and may be unlawful.
LEGAL: Skiline Media GmbH - Managing Director: Michael Saringer