From: Timur Irmatov <thor(at)sarkor(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: index usage
Date: 2003-01-17 15:08:14
Message-ID: 87109443711.20030117200814@sarkor.com
Lists: pgsql-performance
TL> Timur Irmatov <thor(at)sarkor(dot)com> writes:
>> Limit (cost=0.00..0.19 rows=1 width=6) (actual time=0.43..0.43 rows=0 loops=1)
>> -> Index Scan using timeindex on mediumstats (cost=0.00..2898.96 rows=15185 width=6) (actual time=0.42..0.42 rows=0 loops=1)
TL> The planner has absolutely no clue about the behavior of your function,
TL> and so its estimate of the number of rows matched is way off, leading to
TL> a poor estimate of the cost of an indexscan. There is not much to be
TL> done about this in the current system (though I've speculated about the
TL> possibility of computing statistics for functional indexes).
you're absolutely right.
thanks.
TL> Just out of curiosity, why don't you lose all this year/month/day stuff
TL> and use a timestamp column? Less space, more functionality.
:-)
Well, I've seen a lot of people on the pgsql-general mailing list with
problems with dates and timestamps, and I was just scared of using the
PostgreSQL date and time types and functions..
Maybe I should just try it myself before doing it another way...
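For what it's worth, Tom's suggestion might look something like the sketch below. The column and value names are hypothetical (the original table layout isn't shown in the thread); only the table name `mediumstats` and index name `timeindex` come from the EXPLAIN output above.

```sql
-- Hypothetical schema: one timestamp column instead of
-- separate year/month/day columns.
CREATE TABLE mediumstats (
    ts    timestamp NOT NULL,  -- replaces year/month/day
    value integer               -- hypothetical payload column
);

-- A plain B-tree index on the column. The planner gathers ordinary
-- statistics for it via ANALYZE, so row estimates stay accurate --
-- unlike a functional index, for which 7.3 keeps no statistics.
CREATE INDEX timeindex ON mediumstats (ts);

-- Range queries can then use the index directly:
SELECT value
FROM mediumstats
WHERE ts >= '2003-01-17' AND ts < '2003-01-18'
ORDER BY ts
LIMIT 1;
```

The half-open range (`>=` lower bound, `<` upper bound) is the usual idiom for selecting a whole day from a timestamp column without any date arithmetic in the WHERE clause.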