From: Peter Eisentraut <peter(dot)eisentraut(at)enterprisedb(dot)com>
To: Masahiko Sawada <masahiko(dot)sawada(at)2ndquadrant(dot)com>, Peter Geoghegan <pg(at)bowt(dot)ie>
Cc: Thomas Munro <thomas(dot)munro(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Boundary value check in lazy_tid_reaped()
Date: 2021-01-20 06:50:17
Message-ID: 70e1c1e9-b4b4-346f-e9fc-18d006bab444@enterprisedb.com
Lists: pgsql-hackers
On 2020-10-30 02:43, Masahiko Sawada wrote:
> Using the integer set is very memory efficient (5MB vs. 114MB in the
> base case) and there is no 1GB limitation. Looking at the execution
> time, I had expected that using the integer set would be slower at
> recording TIDs than using the simple array, but in this experiment
> there is no big difference among the methods. Perhaps the result will
> vary if vacuum needs to record many more dead tuple TIDs. From the
> results, I can see a good improvement in the integer set case, which
> is probably a good start, but if we want to use it for lazy vacuum as
> well, we will need to improve it so that it can be used on DSA for
> parallel vacuum.
>
> I've attached the patch I used for the experiment that adds xx_vacuum
> GUC parameter that controls the method of recording TIDs. Please note
> that it doesn't support parallel vacuum.
How do you want to proceed here? The approach of writing a wrapper for
bsearch with bound check sounded like a good start. All the other ideas
discussed here seem larger projects and would probably be out of scope
of this commit fest.
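For reference, the bound-check idea is straightforward: since most heap tuples are not dead, first compare the probed TID against the smallest and largest recorded dead TIDs, and only fall back to a binary search when the TID lies within that range. Below is a minimal, hypothetical sketch in C; the `ItemPtr` type, the `tid_reaped()` name, and the packed-integer TID representation are illustrative simplifications, not PostgreSQL's actual `ItemPointerData` machinery.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical simplified TID: block and offset packed into one integer. */
typedef uint64_t ItemPtr;

static int
itemptr_cmp(const void *a, const void *b)
{
    ItemPtr x = *(const ItemPtr *) a;
    ItemPtr y = *(const ItemPtr *) b;

    return (x > y) - (x < y);
}

/*
 * bsearch wrapper with a boundary check: if the probed TID falls outside
 * the [min, max] range of the sorted dead-TID array, it cannot be present,
 * so we skip the binary search entirely.  Only in-range TIDs pay for the
 * full O(log n) lookup.
 */
static bool
tid_reaped(ItemPtr tid, const ItemPtr *dead, size_t ndead)
{
    if (ndead == 0)
        return false;
    if (tid < dead[0] || tid > dead[ndead - 1])
        return false;           /* out of range: definitely not dead */
    return bsearch(&tid, dead, ndead, sizeof(ItemPtr), itemptr_cmp) != NULL;
}
```

The win comes from the common case: during the index scan, the vast majority of index entries point to live tuples, and many of those fall outside the dead-TID range, so the two boundary comparisons short-circuit the search.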