From: Chris Kratz <chris(dot)kratz(at)vistashare(dot)com>
To: Rod Taylor <rbt(at)sitesell(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Performance Tuning
Date: 2005-02-09 22:15:26
Message-ID: 200502091715.26974.chris.kratz@vistashare.com
Lists: pgsql-performance
On Wednesday 09 February 2005 03:27 pm, you wrote:
---snip---
> > We continue to tune our individual queries where we can, but it seems we
> > still are waiting on the db a lot in our app. When we run most queries,
> > top shows the postmaster running at 90%+ constantly during the duration
> > of the request.
>
> Is this for the duration of a single request or 90% constantly?
No, this is during the processing of a request. The rest of the time, it sits
idle.
We thought we would post our config and see if there was something obvious we
were missing. I expect the only real answer is to continue to optimise the
SQL our app generates, since complexity seems to be the issue.
> If it's a single request, odds are you're going through much more
> information than you need to. Lots of aggregate work (max / min) perhaps
> or count(*)'s where an approximation would do?
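(For the archives: where an approximation really would do, the planner's own statistics can stand in for a full `count(*)` scan. A sketch of that trick, using one of our table names as the example; the figure is only as fresh as the last ANALYZE:)

```sql
-- Approximate row count from the system catalogs: no table scan,
-- accuracy depends on how recently the table was analyzed/vacuumed.
SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'namemaster';
```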
Yes, many of our queries heavily use common aggregates and grouping, and the
EXPLAIN ANALYZE output bears out that we spend most of our time in sorts
related to the grouping, aggregating, etc. The problem is that we often need
to pull multiple records per person, but then summarize that data per person.
Our users want Accurate, Fast and Complex. It's hard to convince them they
can only have 2 of the 3. :-)
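To illustrate the multiple-rows-per-person pattern, it boils down to something like this (simplified from our schema; the joined column and `last_contact` are stand-ins, not our real column names):

```sql
-- Simplified shape of our reporting queries: several detail rows
-- per person, collapsed to one summary row per person.
SELECT n.nameid,
       n.fullname_lfm_sort,
       count(r.relationshipid) AS num_relationships,
       max(r.last_contact)     AS most_recent_contact
FROM namemaster n
LEFT JOIN relationshipdetail r ON r.nameid = n.nameid
WHERE n.programid = 55
GROUP BY n.nameid, n.fullname_lfm_sort
ORDER BY n.fullname_lfm_sort;
```

It's the sort feeding the GroupAggregate on queries like this that eats most of the time.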
> > Our question is simply this, is it better to invest in a faster processor
> > at this point, or are there configuration changes to make it faster?
> > I've done
>
> If it's for a single request, you cannot get single processors which are
> much faster than what you describe as having.
>
> Want to send us a few EXPLAIN ANALYZE's of your longer running queries?
Many (most) of our queries are dynamic, based on what the user needs.
Searches, statistics gathering, etc. are all common tasks our users perform.
Here is an explain from a common search giving a list of people. This runs in
about 4.2s (4.5s with web page generation), which is actually pretty amazing
when you think about what it does. It's just that we are always looking for
speed in the web environment, since concurrent usage can be high at times,
making the server feel less responsive. I'm looking at possibly moving this
into lazy materialized views at some point, since I can't seem to make the
SQL go much faster.
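By "lazy materialized view" I just mean a hand-rolled summary table we refresh on demand, since there is no built-in support for this. Roughly (where `person_search_view` is a hypothetical view wrapping the big query below):

```sql
-- One-time setup: snapshot the expensive query into a plain table
-- and index it for the common sort/search paths.
CREATE TABLE person_search_cache AS
SELECT * FROM person_search_view;

CREATE INDEX person_search_cache_name_idx
    ON person_search_cache (fullname_lfm_sort);

-- Refresh whenever the underlying data is known to be stale,
-- e.g. nightly or after a batch import:
BEGIN;
DELETE FROM person_search_cache;
INSERT INTO person_search_cache SELECT * FROM person_search_view;
COMMIT;
```

The web pages would then read from the cache table and tolerate slightly stale data.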
Sort  (cost=8165.28..8198.09 rows=13125 width=324) (actual time=4116.714..4167.915 rows=13124 loops=1)
  Sort Key: system_name_id, fullname_lfm_sort
  ->  GroupAggregate  (cost=6840.96..7267.53 rows=13125 width=324) (actual time=2547.928..4043.255 rows=13124 loops=1)
        ->  Sort  (cost=6840.96..6873.78 rows=13125 width=324) (actual time=2547.876..2603.938 rows=14115 loops=1)
              Sort Key: system_name_id, fullname_last_first_mdl, phone, daytime_phone, email_address, fullname_lfm_sort, firstname, is_business, ssn, inactive
              ->  Subquery Scan foo  (cost=5779.15..5943.21 rows=13125 width=324) (actual time=2229.877..2459.003 rows=14115 loops=1)
                    ->  Sort  (cost=5779.15..5811.96 rows=13125 width=194) (actual time=2229.856..2288.350 rows=14115 loops=1)
                          Sort Key: dem.nameid, dem.name_float_lfm_sort
                          ->  Hash Left Join  (cost=2354.58..4881.40 rows=13125 width=194) (actual time=1280.523..2139.423 rows=14115 loops=1)
                                Hash Cond: ("outer".relatednameid = "inner".nameid)
                                ->  Hash Left Join  (cost=66.03..1889.92 rows=13125 width=178) (actual time=576.228..1245.760 rows=14115 loops=1)
                                      Hash Cond: ("outer".nameid = "inner".nameid)
                                      ->  Merge Left Join  (cost=0.00..1758.20 rows=13125 width=174) (actual time=543.056..1015.657 rows=13124 loops=1)
                                            Merge Cond: ("outer".inactive = "inner".validanswerid)
                                            ->  Index Scan using namemaster_inactive_idx on namemaster dem  (cost=0.00..3714.19 rows=13125 width=163) (actual time=0.594..188.219 rows=13124 loops=1)
                                                  Filter: (programid = 55)
                                            ->  Index Scan using validanswerid_pk on validanswer ina  (cost=0.00..1103.61 rows=46367 width=19) (actual time=0.009..360.218 rows=26005 loops=1)
                                      ->  Hash  (cost=65.96..65.96 rows=31 width=8) (actual time=33.053..33.053 rows=0 loops=1)
                                            ->  Nested Loop  (cost=0.00..65.96 rows=31 width=8) (actual time=0.078..25.047 rows=1874 loops=1)
                                                  ->  Index Scan using relationship_programid on relationship s  (cost=0.00..3.83 rows=1 width=4) (actual time=0.041..0.047 rows=1 loops=1)
                                                        Index Cond: (programid = 55)
                                                        Filter: (inter_agency_id = 15530)
                                                  ->  Index Scan using "relationshipdetail_relatio-4" on relationshipdetail r  (cost=0.00..61.17 rows=77 width=12) (actual time=0.017..9.888 rows=1874 loops=1)
                                                        Index Cond: (r.relationshipid = "outer".relationshipid)
                                ->  Hash  (cost=2142.84..2142.84 rows=58284 width=24) (actual time=704.197..704.197 rows=0 loops=1)
                                      ->  Seq Scan on namemaster rln155301  (cost=0.00..2142.84 rows=58284 width=24) (actual time=0.015..402.784 rows=58284 loops=1)
Total runtime: 4228.945 ms
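Since nearly all of the time above is in the Sort nodes, one cheap experiment we can try is raising the per-sort memory for a single session so the sorts stay in RAM (the setting is `sort_mem` in 7.x, renamed `work_mem` in 8.0; value in KB, and the number below is just a guess to test with, not a recommendation):

```sql
-- Per-session only; does not touch postgresql.conf.
SET sort_mem = 32768;   -- 32 MB for this session

-- Then re-run the query above under EXPLAIN ANALYZE and compare
-- the actual times on the Sort nodes.
```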