From: khare <khare(at)students(dot)uiuc(dot)edu>
To: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Cc: pgman <pgman(at)candle(dot)pha(dot)pa(dot)us>
Subject: Add free-behind capability for large sequential scans
Date: 2002-02-12 17:41:11
Message-ID: 3C908751@webmail.uiuc.edu
Lists: pgsql-hackers
Hi All,
(1) I am Amit Kumar Khare; I am doing my MCS at UIUC, USA, off-campus from India.
(2) We have been asked to enhance PostgreSQL in one of our assignments, so I have chosen to pick "Add free-behind capability for large sequential scans" from the TODO list. Many thanks to Mr. Bruce Momjian, who helped me out and suggested making a patch for this problem.
(3) As Mr. Bruce explained to me, the problem is that if, say, the cache size is 1 MB and a sequential scan is done through a 2 MB file over and over again, the cache becomes useless, because by the time the second read of the table happens, the first 1 MB has already been forced out of the cache. Thus the idea is not to cache very large sequential scans, but to cache index scans and small sequential scans.
(4) I think the problem arises because of the default LRU page replacement policy, so we may have to make use of an MRU or LRU-K page replacement policy instead.
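To see why LRU thrashes here and why MRU helps for this access pattern, here is a small Python simulation. This is just an illustration with made-up numbers, not PostgreSQL code: with 8 KB pages, a 2 MB table is 256 pages and a 1 MB cache holds 128, so the example scans a 256-page table twice through a 128-page cache under each policy (`scan_hits` and the policy names are mine).

```python
from collections import OrderedDict

def scan_hits(num_pages, cache_size, passes, policy="lru"):
    """Count cache hits for repeated sequential scans of num_pages
    pages through a cache holding cache_size pages."""
    cache = OrderedDict()  # ordered coldest -> hottest by last use
    hits = 0
    for _ in range(passes):
        for page in range(num_pages):
            if page in cache:
                hits += 1
                cache.move_to_end(page)  # page becomes most recently used
            elif len(cache) < cache_size:
                cache[page] = True
            else:
                # LRU evicts the coldest page, MRU evicts the hottest
                cache.popitem(last=(policy == "mru"))
                cache[page] = True
    return hits

# Two passes over a 256-page table with a 128-page cache:
print(scan_hits(256, 128, 2, "lru"))  # → 0   (every access misses)
print(scan_hits(256, 128, 2, "mru"))  # → 128 (half of the second pass hits)
```

With LRU the second pass gets zero hits because the tail of the first pass evicted the head, exactly as described above; with MRU the scan keeps recycling one hot slot, so most of the cache stays resident across passes.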
(5) But I am not sure, and I would like more input on the problem description from you all. I have started reading the buffer manager code, and I found that freelist.c may need to be modified, and maybe some other files too, since we have to identify the large sequential scans.
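As for identifying large sequential scans, one hypothetical approach — the names and the 25% threshold below are purely my own illustration, not anything in the current source — is to compare the relation's size against a fraction of the buffer pool when the scan starts, and let a scan flagged as large recycle a small private ring of buffers instead of flooding the main cache:

```python
LARGE_SCAN_FRACTION = 0.25  # assumed tuning knob, purely illustrative

def is_large_scan(table_pages, shared_buffers):
    """Flag a sequential scan as 'large' if the table would displace
    more than a quarter of the buffer pool."""
    return table_pages > LARGE_SCAN_FRACTION * shared_buffers

def scan_with_ring(num_pages, ring_size=16):
    """Free-behind sketch: a large scan cycles through a fixed ring of
    ring_size slots, freeing each buffer behind itself for reuse."""
    ring = [None] * ring_size
    for page in range(num_pages):
        ring[page % ring_size] = page  # overwrite the slot just vacated
    return ring

print(is_large_scan(table_pages=256, shared_buffers=128))  # → True
print(scan_with_ring(40, ring_size=16)[:4])                # → [32, 33, 34, 35]
```

The point of the ring is that a large scan only ever dirties a fixed, small number of buffers, so index scans and small sequential scans keep their cached pages.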
Please help me out.
Regards,
Amit Kumar Khare