From: Greg Smith <gsmith(at)gregsmith(dot)com>
To: Thomas Finneid <tfinneid(at)fcon(dot)no>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: increase index performance
Date: 2009-05-13 01:38:19
Message-ID: alpine.GSO.2.01.0905122129540.7859@westnet.com
Lists: pgsql-performance
On Tue, 12 May 2009, Thomas Finneid wrote:
> on a database with 260 GB of data and an index size of 109 GB on
> separate raid disks. there are
> 85 city_ids,
> 2000 street_ids per city,
> 20 house_ids per street per city,
> 5 floor_ids per house per street per city
You should test what happens if you reduce the index to just being
(city_id,street_id). Having all the fields in there makes the index
larger, and it may end up being faster to just pull all of the ~100 data
rows for a particular (city_id,street_id) using the smaller index and then
filter down to just the ones you need. Having a smaller index to traverse
also means that you'll be more likely to keep all the index blocks in the
buffer cache moving forward.
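
As a sketch of what I mean (the table and index names here are made up,
since your schema wasn't posted):

    CREATE INDEX positions_city_street_idx
        ON positions (city_id, street_id);

    -- The planner can use the two-column index to find the ~100 rows
    -- for the (city_id, street_id) pair, then apply the house_id and
    -- floor_id conditions as a filter on just those rows.
    SELECT *
      FROM positions
     WHERE city_id = 42
       AND street_id = 1337
       AND house_id = 7
       AND floor_id = 3;

Running EXPLAIN ANALYZE against both index layouts is the way to confirm
whether the extra filtering step costs more than the smaller index saves.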
A second level improvement there is to then CLUSTER on the smaller index,
which increases the odds you'll get all of the rows you need by fetching
only a small number of data pages.
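
With the same hypothetical names as above, that's:

    -- Rewrites the table in index order. This is a one-time operation
    -- that takes an exclusive lock, and newly inserted rows aren't kept
    -- in order, so it needs to be re-run periodically as data comes in.
    CLUSTER positions USING positions_city_street_idx;

(On releases before 8.4 the syntax is CLUSTER indexname ON tablename.)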
--
* Greg Smith gsmith(at)gregsmith(dot)com http://www.gregsmith.com Baltimore, MD