From: khare <khare(at)students(dot)uiuc(dot)edu>
To: pgsql-hackers(at)postgresql(dot)org
Cc: pgman(at)candle(dot)pha(dot)pa(dot)us
Subject: Add free-behind capability for large sequential scans
Date: 2002-02-12 07:12:57
Message-ID: 3C8E335C@webmail.uiuc.edu
Lists: pgsql-hackers
Hi All,
(1) I am Amit Kumar Khare. I am doing an MCS at UIUC (USA), off-campus from India.
(2) We have been asked to enhance PostgreSQL in one of our assignments, so I have
chosen "Add free-behind capability for large sequential scans" from the TODO
list. Many thanks to Mr. Bruce Momjian, who helped me out and suggested that I
make a patch for this problem.
(3) As explained to me by Mr. Bruce, the problem is that if, say, the cache size
is 1 MB and a sequential scan is done through a 2 MB file over and over again,
the cache becomes useless, because by the time the second read of the table
happens, the first 1 MB has already been forced out of the cache. Thus the idea
is not to cache very large sequential scans, but to cache index scans and small
sequential scans. A toy simulation of this is sketched below.
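To make the thrashing concrete, here is a small standalone simulation I wrote
(it is not PostgreSQL code; the page and buffer counts are made up). With plain
LRU, a repeated scan of a table twice the size of the cache never hits:

/* Standalone sketch, not PostgreSQL code: repeatedly scan a table of
 * TABLE_PAGES pages through an LRU cache of CACHE_FRAMES frames and
 * count the hits.  The sizes are made up. */
#include <stdio.h>

#define CACHE_FRAMES 100        /* pretend the buffer cache holds 100 pages */
#define TABLE_PAGES  200        /* the scanned table is twice that size     */
#define SCAN_PASSES  5

static int frames[CACHE_FRAMES];    /* page number held in each frame  */
static int stamp[CACHE_FRAMES];     /* last-use time, for LRU eviction */
static int now = 0;

static int
read_page(int page)
{
    int     i, victim = 0;

    for (i = 0; i < CACHE_FRAMES; i++)
        if (frames[i] == page)
        {
            stamp[i] = ++now;       /* hit: just touch the frame */
            return 1;
        }

    /* miss: evict the least recently used frame */
    for (i = 1; i < CACHE_FRAMES; i++)
        if (stamp[i] < stamp[victim])
            victim = i;
    frames[victim] = page;
    stamp[victim] = ++now;
    return 0;
}

int
main(void)
{
    int     i, pass, page, hits = 0;

    for (i = 0; i < CACHE_FRAMES; i++)
        frames[i] = -1;             /* start with an empty cache */

    for (pass = 0; pass < SCAN_PASSES; pass++)
        for (page = 0; page < TABLE_PAGES; page++)
            hits += read_page(page);

    printf("hits: %d of %d reads\n", hits, SCAN_PASSES * TABLE_PAGES);
    return 0;
}

Running it prints "hits: 0 of 1000 reads": every access of every pass misses,
even though half of the table would have fit in the cache the whole time.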
(4) I think the problem arises because of the default LRU page replacement
policy, so I think we have to make use of an MRU or LRU-K page replacement
policy for these scans. An MRU variant of the simulator above is sketched below.
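For comparison, here is an MRU-style victim choice for the same toy simulator
(again only an illustration, not real buffer-manager code): on a miss with no
empty frame it evicts the most recently used frame instead of the least
recently used one.

/* MRU variant of read_page() for the sketch above: evict the *most*
 * recently used frame on a miss.  Illustration only. */
static int
read_page_mru(int page)
{
    int     i, victim = -1;

    for (i = 0; i < CACHE_FRAMES; i++)
        if (frames[i] == page)
        {
            stamp[i] = ++now;           /* hit */
            return 1;
        }

    for (i = 0; i < CACHE_FRAMES; i++)  /* prefer an empty frame */
        if (frames[i] == -1)
        {
            victim = i;
            break;
        }

    if (victim < 0)                     /* none empty: evict the MRU frame */
    {
        victim = 0;
        for (i = 1; i < CACHE_FRAMES; i++)
            if (stamp[i] > stamp[victim])
                victim = i;
    }

    frames[victim] = page;
    stamp[victim] = ++now;
    return 0;
}

If read_page() is replaced with this, the first half of the table stays
resident across passes, so each later pass hits on about half of its reads
(roughly 400 of the 1000 total) instead of 0 with LRU. LRU-K should do even
better, since it can distinguish pages touched only once by a scan from pages
that are referenced repeatedly.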
(5) But I am not sure, and I would like more input on the problem description
from you all. I have started reading the buffer manager code, and I found that
freelist.c may need to be modified, and maybe some other files too, since we
have to identify the large sequential scans. A rough idea of the heuristic I
have in mind is sketched below.
Please help me out
Regards
Amit Kumar Khare