From: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
---|---|
To: | Victor Yegorov <vyegorov(at)gmail(dot)com> |
Cc: | Peter Geoghegan <pg(at)bowt(dot)ie>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org> |
Subject: | Re: Deleting older versions in unique indexes to avoid page splits |
Date: | 2021-01-18 12:45:10 |
Message-ID: | CAA4eK1KYggDtRXbgGRea1Z1uM24Dmsg3zxePTKmNf-a+5xrSFQ@mail.gmail.com |
Lists: | pgsql-hackers |
On Mon, Jan 18, 2021 at 5:11 PM Victor Yegorov <vyegorov(at)gmail(dot)com> wrote:
>
> пн, 18 янв. 2021 г. в 07:44, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>:
>>
>> The summary of the above is that with Case-1 (clean-up based on hint
>> received with the last item on the page) it takes fewer operations to
>> cause a page split as compared to Case-2 (clean-up based on hint
>> received with at least of the items on the page) for a mixed workload.
>> How can we say that it doesn't matter?
>
>
> I cannot understand this, perhaps there's a word missing in the brackets?..
>
There is a missing word ("one") in Case-2; let me write it again to
avoid any confusion: Case-2 (clean-up based on hint received with at
least one of the items on the page).
>
> Thinking of your proposal to undertake bottom-up deletion also on the last-to-fit tuple insertion,
>
I think there is some misunderstanding between what I am trying to say
and the conclusion you have drawn from it. See below.
> I'd like to start with my understanding of the design:
>
> * we try to avoid index bloat, but we do it in the “lazy” way (for a reason, see below)
>
> * it means, that if there is enough space on the leaf page to fit new index tuple,
> we just fit it there
>
> * if there's not enough space, we first look at the presence of LP_DEAD tuples,
> and if they do exist, we scan the whole (index) page to re-check all tuples in order
> to find other not-yet-marked-but-actually-being-LP_DEAD tuples and clean all those up.
>
> * if still not enough space, only now we try bottom-up-deletion. This is a heavy operation and
> can cause extra IO (tests do show this), therefore this operation is undertaken at the point
> when we can justify extra IO against a leaf page split.
>
> * if no space also after bottom-up-deletion, we perform deduplication (if possible)
>
> * finally, we split the page.
>
> Should we try bottom-up-deletion pass in a situation where we're inserting the last possible tuple
> into the page (enough space for *current* insert, but likely no space for the next),
>
I am saying that we try bottom-up deletion when the new insert item
doesn't find enough space on the page and some indexUnchanged tuple(s)
were previously inserted into that page. Of course, as now, it will be
attempted only after an attempt to remove LP_DEAD items.
> then (among others) the following possibilities exist:
>
> - no new tuples ever come to this page anymore (happy accident), which means that
> we've wasted IO cycles
>
> - we undertake bottom-up-deletion pass without success and we're asked to insert new tuple in this
> fully packed index page. This can unroll to:
> - due to the delay, we've managed to find some space either due to LP_DEAD or bottom-up-cleanup.
> which means we've wasted IO cycles on the previous iteration (too early attempt).
> - we still failed to find any space and are forced to split the page.
> in this case we've wasted IO cycles twice.
>
> In my view, these cases where we generate wasted IO cycles (producing no benefit) should be avoided.
>
I don't think any of these can happen in what I am actually saying. Do
you still have the same feeling after reading this email? Off-hand, I
don't see any downsides as compared to the current approach, and it
will have fewer splits in some other workloads, like when there is a
mix of inserts and non-HOT updates (that don't logically change the
index).
--
With Regards,
Amit Kapila.