From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Peter Geoghegan <pg(at)bowt(dot)ie>, Victor Yegorov <vyegorov(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Deleting older versions in unique indexes to avoid page splits
Date: 2021-01-20 13:46:00
Message-ID: CAA4eK1JFOFsx+Ma6M5zeH1H0mBL=QdOw8f4vJy81gtmRmShi=Q@mail.gmail.com
Lists: pgsql-hackers
On Wed, Jan 20, 2021 at 10:50 AM Andres Freund <andres(at)anarazel(dot)de> wrote:
>
> Hi,
>
> On 2021-01-20 09:24:35 +0530, Amit Kapila wrote:
> > I feel extending the deletion mechanism based on the number of LP_DEAD
> > items sounds more favorable than giving preference to duplicate
> > items. Sure, it will give equally good or better results if there are
> > no long-standing open transactions.
>
> There's a lot of workloads that never set LP_DEAD because all scans are
> bitmap index scans. And there's no obvious way to address that. So I
> don't think it's wise to purely rely on LP_DEAD.
>
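To make the bitmap-scan point above concrete, here is a minimal,
self-contained C sketch (hypothetical names and simplified visibility
logic, not PostgreSQL source): a plain index scan checks the heap for
each index entry and can come back to hint that entry LP_DEAD (the
kill_prior_tuple idea), whereas a bitmap index scan only drains the
index into a set of TIDs and never revisits the index page, so it has
no opportunity to set the hint.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

typedef struct IndexEntry
{
    unsigned heap_tid;   /* points at a heap tuple version */
    bool     lp_dead;    /* hint: known dead to all transactions */
} IndexEntry;

/* stand-in for checking the pointed-to heap tuple's visibility */
static bool
heap_tuple_dead_to_all(unsigned heap_tid)
{
    return heap_tid % 2 == 0;   /* pretend even TIDs are dead */
}

/* Plain index scan: the heap check happens per entry, so the scan can
 * mark the entry it just proved dead; these hints later feed "simple"
 * index tuple deletion. */
static void
plain_index_scan(IndexEntry *entries, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (heap_tuple_dead_to_all(entries[i].heap_tid))
            entries[i].lp_dead = true;
}

/* Bitmap index scan: only collects TIDs up front; the heap is visited
 * later in bulk and the index page is never touched again, so lp_dead
 * is never set by this path. */
static void
bitmap_index_scan(const IndexEntry *entries, size_t n, unsigned *tids_out)
{
    for (size_t i = 0; i < n; i++)
        tids_out[i] = entries[i].heap_tid;
}

int
main(void)
{
    IndexEntry idx[4] = {{10, false}, {11, false}, {12, false}, {13, false}};
    unsigned   tids[4];

    plain_index_scan(idx, 4);        /* hints entries 10 and 12 */
    bitmap_index_scan(idx, 4, tids); /* sets no hints at all */

    for (int i = 0; i < 4; i++)
        printf("tid %u lp_dead=%d\n", idx[i].heap_tid, (int) idx[i].lp_dead);
    return 0;
}
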
Right, I understand this point. The point I was trying to make was
that with this new technique we might not be able to delete any tuples
(or might delete only very few) when there are long-running open
transactions in the system, yet we would still incur the CPU and I/O
cost. I am completely in favor of this technique and patch, so don't
get me wrong. As mentioned in my reply to Peter, I am just trying to
see if there are more ways we can use this optimization and reduce the
chances of regression (if any).
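
As a rough illustration of that concern (a minimal sketch with
hypothetical names and a simplified XID comparison that ignores
wraparound, not the patch's actual code): an older heap version is
only removable once the transaction that superseded it is older than
every still-running transaction, so a single long-running transaction
can make the extra heap visits yield nothing while the pass has
already paid their cost.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint32_t TransactionId;

/* hypothetical removability test: the version's deleter must precede
 * the oldest still-running transaction (the "horizon") */
static bool
tuple_removable(TransactionId deleter_xid, TransactionId oldest_running_xid)
{
    return deleter_xid < oldest_running_xid;
}

int
main(void)
{
    TransactionId deleters[] = {1005, 1010, 1042}; /* XIDs that superseded versions */
    TransactionId oldest_running = 1000;           /* a long-running open transaction */

    int removed = 0;
    for (int i = 0; i < 3; i++)
        if (tuple_removable(deleters[i], oldest_running))
            removed++;

    /* With the old transaction still open nothing is removable, yet the
     * deletion pass has already visited the heap pages. */
    printf("removable versions: %d of 3\n", removed);
    return 0;
}
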
--
With Regards,
Amit Kapila.