Re: Can this query go faster???

From: Michael Riess <mlriess(at)gmx(dot)de>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Can this query go faster???
Date: 2005-12-06 09:51:25
Message-ID: dn3muo$i9c$1@news.hub.org
Lists: pgsql-performance

> Hi,
>
> Is it possible to get this query run faster than it does now, by adding
> indexes, changing the query?
>
> SELECT customers.objectid FROM prototype.customers, prototype.addresses
> WHERE
> customers.contactaddress = addresses.objectid
> ORDER BY zipCode asc, housenumber asc
> LIMIT 1 OFFSET 283745
>
> Explain:
>
> Limit  (cost=90956.71..90956.71 rows=1 width=55)
>   ->  Sort  (cost=90247.34..91169.63 rows=368915 width=55)
>         Sort Key: addresses.zipcode, addresses.housenumber
>         ->  Hash Join  (cost=14598.44..56135.75 rows=368915 width=55)
>               Hash Cond: ("outer".contactaddress = "inner".objectid)
>               ->  Seq Scan on customers  (cost=0.00..31392.15 rows=368915 width=80)
>               ->  Hash  (cost=13675.15..13675.15 rows=369315 width=55)
>                     ->  Seq Scan on addresses  (cost=0.00..13675.15 rows=369315 width=55)
>
> The customers table has an index on contactaddress and objectid.
> The addresses table has an index on zipcode+housenumber and objectid.

When the result has to be built from all the rows of both tables, indexes
won't help; a sequential scan of each table is inevitable.
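
If you want to confirm where the time is going, a minimal sketch would be
to run the query under EXPLAIN ANALYZE (using the table and column names
from your post; this is only a suggestion of what to run, not output you
reported). I would expect the Sort over the ~370k joined rows to dominate,
not the sequential scans themselves:

  -- sketch: measure actual node timings; expect the Sort node to dominate
  EXPLAIN ANALYZE
  SELECT customers.objectid
  FROM prototype.customers, prototype.addresses
  WHERE customers.contactaddress = addresses.objectid
  ORDER BY addresses.zipcode ASC, addresses.housenumber ASC
  LIMIT 1 OFFSET 283745;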
