From: Alban Hertroys <dalroi(at)solfertje(dot)student(dot)utwente(dot)nl>
To: Phoenix Kiula <phoenix(dot)kiula(at)gmail(dot)com>
Cc: PG-General Mailing List <pgsql-general(at)postgresql(dot)org>
Subject: Re: Slow first query despite LIMIT and OFFSET clause
Date: 2009-01-27 18:37:22
Message-ID: 984DA324-F956-498D-94B9-B2DA33F1C5A4@solfertje.student.utwente.nl
Lists: pgsql-general
On Jan 26, 2009, at 4:41 AM, Phoenix Kiula wrote:
> Appreciate any thoughts.
>
> My query is:
>
>
> explain analyze SELECT
> testimonials.url
> ,testimonials.alias
> ,testimonials.aliasEntered
> ,testimonials.title
> ,testimonials.modify_date
> ,testimonials.id
> ,visitcount.visit_count
> ,visitcount.unique_count
> ,visitcount.modify_date
> ,coalesce( extract(epoch from now()) - extract(epoch
> from visitcount.modify_date), 0)
> ,(select count(id) from testimonials WHERE
> testimonials.user_id = 'superman' and testimonials.user_known = 1 and
> testimonials.status = 'Y' ) AS total
> FROM testimonials
> LEFT JOIN visitcount ON testimonials.id = visitcount.id
> WHERE
> testimonials.user_id = 'superman'
> and testimonials.user_known = 1
> and testimonials.status = 'Y'
> ORDER BY testimonials.modify_date desc
> OFFSET 0 LIMIT 10
> ;
>
>
>
>
> QUERY PLAN
> ----------------------------------------------------------------------------------------------------------------------------------------------------
> Limit (cost=224.68..224.71 rows=10 width=187) (actual
> time=453.429..453.539 rows=10 loops=1)
> InitPlan
> -> Aggregate (cost=63.52..63.53 rows=1 width=8) (actual
> time=89.268..89.271 rows=1 loops=1)
> -> Index Scan using new_idx_userknown on testimonials
> (cost=0.00..63.41 rows=42 width=8) (actual time=0.039..49.968
> rows=10149 loops=1)
> Index Cond: ((user_id)::text = 'superman'::text)
> Filter: (status = 'Y'::bpchar)
> -> Sort (cost=161.16..161.26 rows=42 width=187) (actual
> time=453.420..453.464 rows=10 loops=1)
> Sort Key: testimonials.modify_date
> -> Nested Loop Left Join (cost=0.00..160.02 rows=42
> width=187) (actual time=89.384..395.008 rows=10149 loops=1)
> -> Index Scan using new_idx_userknown on testimonials
> (cost=0.00..63.41 rows=42 width=171) (actual time=0.061..50.990
> rows=10149 loops=1)
> Index Cond: ((user_id)::text = 'superman'::text)
> Filter: (status = 'Y'::bpchar)
> -> Index Scan using visitcount_pkey1 on visitcount
> (cost=0.00..2.28 rows=1 width=24) (actual time=0.007..0.010 rows=1
> loops=10149)
> Index Cond: (testimonials.id = visitcount.id)
> Total runtime: 461.682 ms
> (15 rows)
Does that query plan look any better without the select count(id) from
testimonials?
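For comparison, the same query with the correlated count subselect removed might look like this (a sketch only; the total would then have to be fetched in a separate statement or maintained elsewhere):

```sql
-- Sketch: the poster's query without the count(id) InitPlan.
SELECT t.url, t.alias, t.aliasEntered, t.title, t.modify_date, t.id,
       v.visit_count, v.unique_count, v.modify_date,
       coalesce(extract(epoch FROM now())
                - extract(epoch FROM v.modify_date), 0)
FROM testimonials t
LEFT JOIN visitcount v ON t.id = v.id
WHERE t.user_id = 'superman'
  AND t.user_known = 1
  AND t.status = 'Y'
ORDER BY t.modify_date DESC
OFFSET 0 LIMIT 10;
```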
If so, you may be better off keeping track of those counts in a
separate table updated by triggers on the testimonials table. Whether
that really helps depends on how variable the selectors are that you
use to determine those counts. If those counts are generally very low
the benefit will probably be minimal.
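A trigger-maintained counts table could be sketched roughly like this (table, function, and trigger names here are hypothetical; status changes via UPDATE would also need handling in a real setup):

```sql
-- Hypothetical sketch: keep per-user totals in a side table,
-- maintained by an AFTER trigger on testimonials.
CREATE TABLE testimonial_counts (
    user_id text PRIMARY KEY,
    total   bigint NOT NULL DEFAULT 0
);

CREATE OR REPLACE FUNCTION update_testimonial_counts() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' AND NEW.user_known = 1 AND NEW.status = 'Y' THEN
        UPDATE testimonial_counts SET total = total + 1
            WHERE user_id = NEW.user_id;
        IF NOT FOUND THEN
            INSERT INTO testimonial_counts (user_id, total)
                VALUES (NEW.user_id, 1);
        END IF;
    ELSIF TG_OP = 'DELETE' AND OLD.user_known = 1 AND OLD.status = 'Y' THEN
        UPDATE testimonial_counts SET total = total - 1
            WHERE user_id = OLD.user_id;
    END IF;
    RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER testimonials_count_trg
    AFTER INSERT OR DELETE ON testimonials
    FOR EACH ROW EXECUTE PROCEDURE update_testimonial_counts();
```

The query's `(select count(id) ...) AS total` subselect could then be replaced with a cheap primary-key lookup against `testimonial_counts`.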
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.