From: Sam Mason <sam(at)samason(dot)me(dot)uk>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Poor select count(*) performance
Date: 2009-02-24 02:54:18
Message-ID: 20090224025418.GP32672@frubble.xen.chris-lamb.co.uk
Lists: pgsql-general
On Mon, Feb 23, 2009 at 05:44:05PM -0800, Mike Ivanov wrote:
> An hour ago it took 8 seconds, one minute ago the same query took only
> 7 milliseconds.
>
> Any ideas why the execution time varies so wildly?
>
> Explain Analyze gives:
>
> Aggregate (cost=2000.08..2000.09 rows=1 width=0) (actual time=6.962..6.963 rows=1 loops=1)
> -> Index Scan using lingq_card_context_id on lingq_card
> (cost=0.00..1998.68 rows=561 width=0) (actual time=0.025..5.045 rows=2830 loops=1)
> Index Cond: (context_id = 68672)
> Total runtime: 7.011 ms
If you're unlucky, none of those 2830 rows in the example above will be
in memory and you'll have to wait for the disk to bring them all back.
Depending on where the rows sit on disk and how fast your disks are,
this could take tens of seconds.
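One way to see whether the cache is what's making the difference: newer
versions of PostgreSQL (9.0 and later, so not necessarily the version in
this thread) let you ask EXPLAIN to report buffer activity. A sketch,
assuming the table and column names from the plan above:

```sql
-- "shared hit" = pages found in PostgreSQL's buffer cache,
-- "read" = pages that had to come from disk (or the OS cache).
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM lingq_card WHERE context_id = 68672;
```

Running it twice back to back usually shows the second run served almost
entirely from shared hits, which is exactly the 8-seconds-versus-7-ms
effect described above.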
If you want this sort of thing to go quicker you could try CLUSTERing
the table on this index, but that will slow down other queries that
want data to come off the disk in a different order. It's a balancing
act!
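For reference, clustering on the index from the plan above would look
like this (CLUSTER takes an exclusive lock and rewrites the table, so
it's not free, and the ordering decays as rows are updated):

```sql
-- Physically reorder lingq_card to match the index on context_id,
-- so rows sharing a context_id end up on adjacent disk pages.
CLUSTER lingq_card USING lingq_card_context_id;

-- CLUSTER invalidates planner statistics; refresh them afterwards.
ANALYZE lingq_card;
```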
--
Sam http://samason.me.uk/