From: Hannu Krosing <hannu(at)tm(dot)ee>
To: Don Baccus <dhogaza(at)pacifier(dot)com>
Cc: Zeugswetter Andreas SB SD <ZeugswetterA(at)spardat(dot)at>, Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>, Daniel Kalchev <daniel(at)digsys(dot)bg>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: again on index usage
Date: 2002-01-12 15:08:24
Message-ID: 3C405168.8AEAAB8E@tm.ee
Lists: pgsql-hackers
Don Baccus wrote:
>
> Zeugswetter Andreas SB SD wrote:
>
> > This is one of the main problems of the current optimizer which imho rather
> > aggressively chooses seq scans over index scans. During high load this does
> > not pay off.
>
> Bingo ... dragging huge tables through the buffer cache via a sequential
> scan guarantees that a) the next query sequentially scanning the same
> table will have to read every block again (if the table's longer than
> available PG and OS cache) b) on a high-concurrency system other queries
> end up doing extra I/O, too.
>
> Oracle partially mitigates the second effect by refusing to trash its
> entire buffer cache on any given sequential scan. Or so I've been told
> by people who know Oracle well. A repeat of the sequential scan will
> still have to reread the entire table but that's true anyway if the
> table's at least one block longer than available cache.
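The behaviour described in the quote above, where a sequential scan recycles a small private set of buffers instead of flushing the whole cache, can be sketched roughly as follows. This is a hypothetical illustration, not Oracle's or PostgreSQL's actual buffer manager; the class and method names are invented for the example:

```python
from collections import OrderedDict

class BufferCache:
    """Toy buffer cache: random accesses use full LRU, but sequential
    scans recycle a small private ring so they cannot trash the cache."""

    def __init__(self, capacity, ring_size):
        self.capacity = capacity
        self.ring_size = ring_size
        self.lru = OrderedDict()   # page -> None, ordered oldest first
        self.ring = []             # pages owned by the current seq scan

    def read_random(self, page):
        """Normal (index-style) access: ordinary LRU behaviour."""
        if page in self.lru:
            self.lru.move_to_end(page)   # mark as most recently used
            return "hit"
        if len(self.lru) >= self.capacity:
            self.lru.popitem(last=False)  # evict the true LRU page
        self.lru[page] = None
        return "miss"

    def read_seqscan(self, page):
        """Sequential-scan access: reuse the ring, leave the LRU alone."""
        if page in self.lru:
            return "hit"                  # still served from main cache
        if len(self.ring) >= self.ring_size:
            self.ring.pop(0)              # recycle the oldest ring slot
        self.ring.append(page)
        return "miss"
```

With this scheme a scan of a huge table touches only `ring_size` slots, so a concurrent query's working set in `lru` survives; the repeat scan still rereads the whole table, matching the point made above.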
One radical way to get better-than-average cache behaviour in such
pathological cases would be to discard a _random_ page instead of the
LRU page (perhaps tuned to never select a victim from the 1/N of pages
that are most recently used).
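A minimal sketch of that replacement policy, assuming the buffers are kept ordered from most- to least-recently used (the function name and the default protect fraction are made up for the example):

```python
import random

def choose_victim(buffers, protect_fraction=4, rng=random):
    """Pick an eviction victim at random, but never from the
    most-recently-used 1/protect_fraction of the pool.

    `buffers` is assumed ordered MRU first, so indices below
    `protected` are off limits to the random choice.
    """
    protected = len(buffers) // protect_fraction
    return rng.randrange(protected, len(buffers))
```

Compared with strict LRU, a random choice over the older pages means a single large sequential scan no longer deterministically evicts every other query's pages in order.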
-------------
Hannu