From: | "Ozer, Pam" <pozer(at)automotive(dot)com> |
---|---|
To: | <pgsql-performance(at)postgresql(dot)org> |
Subject: | Slow Query- Bad Row Estimate |
Date: | 2010-10-26 23:27:28 |
Message-ID: | 216FFB77CBFAEE4B8EE4DF0A939FF1D1018342@mail-001.corp.automotive.com |
Lists:      pgsql-performance
I have the following query:
select distinct Region.RegionShort as RegionShort,
                County.County      as County
from Region
join PostalCodeRegionCountyCity
  on (PostalCodeRegionCountyCity.RegionId = Region.RegionId)
join DealerGroupGeoCache
  on (DealerGroupGeoCache.RegionId = PostalCodeRegionCountyCity.RegionId)
 and (DealerGroupGeoCache.CountyId = PostalCodeRegionCountyCity.CountyId)
 and (DealerGroupGeoCache.CityId   = PostalCodeRegionCountyCity.CityId)
join County
  on (PostalCodeRegionCountyCity.CountyId = County.CountyId)
where (DealerGroupGeoCache.DealerGroupId = 13)
  and (PostalCodeRegionCountyCity.RegionId = 5)
with the following EXPLAIN ANALYZE output:
"HashAggregate (cost=6743.96..6747.36 rows=34 width=11) (actual
time=854.407..854.425 rows=57 loops=1)"
" -> Nested Loop (cost=0.00..6743.28 rows=34 width=11) (actual
time=0.062..762.698 rows=163491 loops=1)"
" -> Nested Loop (cost=0.00..6679.19 rows=34 width=11) (actual
time=0.053..260.001 rows=163491 loops=1)"
" -> Index Scan using region_i00 on region
(cost=0.00..3.36 rows=1 width=5) (actual time=0.009..0.011 rows=1
loops=1)"
" Index Cond: (regionid = 5)"
" -> Merge Join (cost=0.00..6672.43 rows=34 width=10)
(actual time=0.040..189.654 rows=163491 loops=1)"
" Merge Cond: ((postalcoderegioncountycity.countyid =
dealergroupgeocache.countyid) AND (postalcoderegioncountycity.cityid =
dealergroupgeocache.cityid))"
" -> Index Scan using postalcoderegioncountycity_i06
on postalcoderegioncountycity (cost=0.00..716.05 rows=2616 width=10)
(actual time=0.018..1.591 rows=2615 loops=1)"
" Index Cond: (regionid = 5)"
" -> Index Scan using dealergroupgeocache_i01 on
dealergroupgeocache (cost=0.00..5719.56 rows=9055 width=10) (actual
time=0.015..87.689 rows=163491 loops=1)"
" Index Cond:
((dealergroupgeocache.dealergroupid = 13) AND
(dealergroupgeocache.regionid = 5))"
" -> Index Scan using county_i00 on county (cost=0.00..1.77
rows=1 width=12) (actual time=0.002..0.002 rows=1 loops=163491)"
" Index Cond: (county.countyid =
dealergroupgeocache.countyid)"
"Total runtime: 854.513 ms"
The statistics have recently been updated, but that did not change the bad estimates.
The DealerGroupGeoCache table has 765,392 rows, and the query returns 57 rows.
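If raising the per-column statistics target beyond the current default would help, this is the kind of thing I had in mind, followed by a re-analyze (sketch only; 10000 is just the 8.4 maximum, not a tested value):

-- Sketch: raise the statistics target on the join columns that seem to
-- be misestimated, then refresh the statistics for that table.
ALTER TABLE DealerGroupGeoCache ALTER COLUMN CountyId SET STATISTICS 10000;
ALTER TABLE DealerGroupGeoCache ALTER COLUMN CityId SET STATISTICS 10000;
ANALYZE DealerGroupGeoCache;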
I am not involved in how the server is set up, so getting the settings changed is not very likely unless it would make a huge difference.
Is there any way for me to speed up this query without changing the settings?
If not, what changes do you think would be needed?
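One rewrite I was wondering about is moving DealerGroupGeoCache into an EXISTS, so the duplicate rows never have to be produced before the DISTINCT; this is only a sketch of the idea, not something I have tested:

-- Sketch: filter PostalCodeRegionCountyCity with EXISTS instead of
-- joining DealerGroupGeoCache and de-duplicating afterwards.
select distinct r.RegionShort as RegionShort,
                c.County      as County
from Region r
join PostalCodeRegionCountyCity p on (p.RegionId = r.RegionId)
join County c on (c.CountyId = p.CountyId)
where p.RegionId = 5
  and exists (select 1
              from DealerGroupGeoCache d
              where d.DealerGroupId = 13
                and d.RegionId = p.RegionId
                and d.CountyId = p.CountyId
                and d.CityId = p.CityId);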
We are currently running PostgreSQL 8.4 with the following settings:
shared_buffers = 500MB   # min 128kB
effective_cache_size = 1000MB
max_connections = 100
temp_buffers = 100MB
work_mem = 100MB
maintenance_work_mem = 500MB
max_files_per_process = 10000
seq_page_cost = 1.0
random_page_cost = 1.1
cpu_tuple_cost = 0.1
cpu_index_tuple_cost = 0.05
cpu_operator_cost = 0.01
default_statistics_target = 1000
autovacuum_max_workers = 1
#log_min_messages = DEBUG1
#log_min_duration_statement = 1000
#log_statement = all
#log_temp_files = 128
#log_lock_waits = on
#log_line_prefix = '%m %u %d %h %p %i %c %l %s'
#log_duration = on
#debug_print_plan = on
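If per-session overrides count as not changing the settings, the sort of test I had in mind looks like this (sketch only; I have not tried it, and the setting goes back to normal on RESET or when the session ends):

-- Sketch: a per-session planner override for testing only; this does
-- not touch postgresql.conf.
SET enable_nestloop = off;
-- re-run the query with EXPLAIN ANALYZE here to compare plans
RESET enable_nestloop;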
Any help is appreciated,
Pam