Re: Nested loops overpriced

From: Peter Eisentraut <peter_e(at)gmx(dot)net>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Nested loops overpriced
Date: 2007-05-10 15:30:22
Message-ID: 200705101730.22883.peter_e@gmx.net
Lists: pgsql-performance

On Wednesday, 9 May 2007 at 19:40, Tom Lane wrote:
> I remember having dithered about whether
> to try to avoid counting the same physical relation more than once in
> total_table_pages, but this example certainly suggests that we
> shouldn't. Meanwhile, do the estimates get better if you set
> effective_cache_size to 1GB or so?

Yes, that makes the plan significantly cheaper (something like 500,000 instead
of 5,000,000), but it is still a lot more expensive than the hash join (about
100,000).

> To return to your original comment: if you're trying to model a
> situation with a fully cached database, I think it's sensible
> to set random_page_cost = seq_page_cost = 0.1 or so. You had
> mentioned having to decrease them to 0.02, which seems unreasonably
> small to me too, but maybe with the larger effective_cache_size
> you won't have to go that far.

Heh, when I decrease these parameters, the hash join gets cheaper as well. I
can't actually get it to pick the nested-loop join.
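For anyone wanting to reproduce this experiment, the settings discussed above can be tried per-session before re-running EXPLAIN, without touching postgresql.conf. This is only a sketch; the values are the ones mentioned in this thread, not general recommendations, and the query itself is whatever statement you are investigating:

```sql
-- Session-local planner settings from this thread (PostgreSQL 8.2+;
-- seq_page_cost and memory-unit syntax were added in 8.2).
SET effective_cache_size = '1GB';
SET seq_page_cost = 0.1;     -- model a fully cached database
SET random_page_cost = 0.1;  -- make random I/O as cheap as sequential

-- Re-run the problematic query here and compare the estimated costs:
-- EXPLAIN ANALYZE <your query>;

-- Revert to the server defaults afterwards:
RESET effective_cache_size;
RESET seq_page_cost;
RESET random_page_cost;
```

Note that lowering both page costs scales the estimated cost of every plan that reads pages, which is consistent with the observation above that the hash join gets cheaper too.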

--
Peter Eisentraut
http://developer.postgresql.org/~petere/
