From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Tyrrill, Ed" <tyrrill_ed(at)emc(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Slow queries on big table
Date: 2007-05-18 19:59:22
Message-ID: 1736.1179518362@sss.pgh.pa.us
Lists: pgsql-performance

"Tyrrill, Ed" <tyrrill_ed(at)emc(dot)com> writes:
> Index Scan using backup_location_pkey on backup_location
> (cost=0.00..1475268.53 rows=412394 width=8) (actual
> time=3318.057..1196723.915 rows=2752 loops=1)
>   Index Cond: (backup_id = 1070)
> Total runtime: 1196725.617 ms
If we take that at face value, it says the indexscan is requiring about
434 msec per actual row fetched (1196723.915 ms / 2752 rows), which is
just not very credible; the worst case should be about 1 disk seek per
row fetched. So there's something going on that doesn't meet the eye.
What I'm wondering about is whether the table is heavily updated and
seldom vacuumed, leading to lots and lots of dead tuples being fetched
and then rejected (hence they'd not show in the actual-rows count).
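One quick way to check that theory, if you can afford the maintenance
pass (this is a suggested diagnostic, not something from the thread), is
to run VACUUM VERBOSE on the table and look at how many dead row
versions it reports:

```sql
-- Sketch of a dead-tuple check: VACUUM VERBOSE reports, per table and
-- index, how many removable dead row versions it found and cleaned up.
-- A dead-tuple count vastly larger than the 2752 live rows returned
-- would support the "fetched and rejected" explanation above.
VACUUM VERBOSE backup_location;
```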
The other thing that seems pretty odd is that it's not using a bitmap
scan --- for such a large estimated rowcount I'd have expected a bitmap
scan not a plain indexscan. What do you get from EXPLAIN ANALYZE if
you force a bitmap scan? (Set enable_indexscan off, and enable_seqscan
too if you have to.)
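For concreteness, a sketch of what that test could look like; the
SELECT list here is assumed, since the original query isn't shown
upthread, and only the table name and backup_id value come from the
plan above:

```sql
-- Disable plain indexscans for this session only, so the planner falls
-- back to a bitmap scan; disable seqscans too if it picks one instead.
SET enable_indexscan = off;
SET enable_seqscan = off;

-- Assumed form of the query, reconstructed from the plan's Index Cond.
EXPLAIN ANALYZE
SELECT * FROM backup_location WHERE backup_id = 1070;

-- Restore the planner settings afterwards.
RESET enable_indexscan;
RESET enable_seqscan;
```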
regards, tom lane