From: Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>
To: Claudio Freire <klaussfreire(at)gmail(dot)com>
Cc: PostgreSQL-Dev <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] [PATCH] Vacuum: Update FSM more frequently
Date: 2018-02-02 00:34:20
Message-ID: CAD21AoCym-DVTdt1nRWtbKuav4WbYQ9k5=5xrjSTUgkffBxqOg@mail.gmail.com
Lists: pgsql-hackers
On Mon, Jan 29, 2018 at 11:31 PM, Claudio Freire <klaussfreire(at)gmail(dot)com> wrote:
> On Mon, Jan 29, 2018 at 4:12 AM, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com> wrote:
>> On Sat, Jul 29, 2017 at 9:42 AM, Claudio Freire <klaussfreire(at)gmail(dot)com> wrote:
>>> Introduce a tree pruning threshold to FreeSpaceMapVacuum that avoids
>>> recursing into branches that already contain enough free space, to
>>> avoid having to traverse the whole FSM and thus induce quadratic
>>> costs. Intermediate FSM vacuums are only supposed to make enough
>>> free space visible to avoid extension until the final (non-partial)
>>> FSM vacuum.
>>
>> Hmm, I think this resolve a part of the issue. How about calling
>> AutoVacuumRequestWork() in PG_CATCH() if VACOPT_VACUUM is specified
>> and give the relid that we were vacuuming but could not complete as a
>> new autovacuum work-item? The new autovacuum work-item makes the
>> worker vacuum FSMs of the given relation and its indices.
>
> Well, I tried that in fact, as I mentioned in the OP.
>
> I abandoned it due to the conjunction of the 2 main blockers I found
> and mentioned there. In essence, those issues defeat the purpose of
> the patch (to get the free space visible ASAP).
>
> Don't forget, this is aimed at cases where autovacuum of a single
> relation takes a very long time. That is, very big relations. Maybe
> days, like in my case. A whole autovacuum cycle can take weeks, so
> delaying FSM vacuum that much is not good, and using work items still
> cause those delays, not to mention the segfaults.
Yeah, I agree with vacuuming the FSM more frequently, because it can prevent
table bloat caused by concurrent modifications. But as for preventing the
bloat that comes from autovacuum being cancelled, I guess we need something
more. With this proposal we only "might be" able to avoid table bloat after a
cancelled autovacuum. Since we can know that autovacuum has been cancelled,
I'd like to have a way to make sure the FSM is vacuumed even when the vacuum
itself is cancelled. Thoughts?
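
To illustrate what I mean, here is a rough sketch (my illustration only, not
something taken from the patch; the placement inside the heap-vacuuming phase
and the two-argument FreeSpaceMapVacuum() are assumptions based on the hunks
quoted below):

/*
 * Rough sketch only: wrap the long-running heap scan so that the FSM
 * still gets vacuumed when the worker is cancelled.  The placement and
 * the two-argument FreeSpaceMapVacuum() are assumptions, not the patch.
 */
PG_TRY();
{
    /* the part that can take days and may receive a cancel interrupt */
    lazy_scan_heap(onerel, options, vacrelstats, Irel, nindexes, aggressive);
}
PG_CATCH();
{
    /*
     * Make the free space recorded so far visible before re-throwing,
     * so that concurrent backends can reuse it.
     */
    FreeSpaceMapVacuum(onerel, 0);
    PG_RE_THROW();
}
PG_END_TRY();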
Also, the patch always vacuums the FSM with a threshold at the beginning of
the vacuum, but that is wasted work if the table has already been vacuumed
properly. I don't think it's a good idea to add an extra step that only
"might be" beneficial, because vacuum is already heavy as it is.
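
For instance, something along these lines (purely illustrative; pgstat may or
may not be the right source of information, and initial_fsm_threshold is an
invented name, not part of the patch):

/*
 * Hypothetical guard: only do the up-front FSM vacuum when the stats
 * suggest the previous vacuum left many dead tuples behind (e.g.
 * because it was cancelled).  initial_fsm_threshold is invented for
 * illustration and is not part of the patch.
 */
PgStat_StatTabEntry *tabentry;

tabentry = pgstat_fetch_stat_tabentry(RelationGetRelid(onerel));
if (tabentry != NULL && tabentry->n_dead_tuples > initial_fsm_threshold)
    FreeSpaceMapVacuum(onerel, threshold);

That would keep the common case, where the table is already in good shape, as
cheap as it is today.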
>
>> That way, we
>> can ensure that FSM gets vacuumed by the cancelled autovacuum process
>> or other autovacuum processes. Since a work-item can be handled by
>> other autovacuum process I think 256 work-item limit would not be a
>> problem.
>
> Why do you think it wouldn't? In particular if you take into account
> the above. If you have more than 256 relations in the cluster, it
> could very well happen that you've queued the maximum amount and no
> autovac worker has had a chance to take a look at them, because
> they're all stuck vacuuming huge relations.
>
> Not to mention the need to de-duplicate work items. We wouldn't want
> to request repeated FSM vacuums, or worst, queue an FSM vacuum of a
> single table 256 times and fill up the queue with redundant items.
> With the current structure, de-duplication is O(N), so if we wanted to
> raise the limit of 256 work items, we'd need a structure that would
> let us de-duplicate in less than O(N). In essence, it's a ton of work
> for very little gain. Hence why I abandoned it.
I'd missed that point. Agreed with you.
Regarding the details of the patch:
--- a/src/backend/storage/freespace/indexfsm.c
+++ b/src/backend/storage/freespace/indexfsm.c
@@ -70,5 +70,5 @@ RecordUsedIndexPage(Relation rel, BlockNumber usedBlock)
void
IndexFreeSpaceMapVacuum(Relation rel)
{
- FreeSpaceMapVacuum(rel);
+ FreeSpaceMapVacuum(rel, 0);
}
@@ -816,11 +820,19 @@ fsm_vacuum_page(Relation rel, FSMAddress addr, bool *eof_p)
{
int child_avail;
+ /* Tree pruning for partial vacuums */
+ if (threshold)
+ {
+ child_avail = fsm_get_avail(page, slot);
+ if (child_avail >= threshold)
+ continue;
+ }
Don't we skip all FSM pages if we set the threshold to 0?
Regards,
--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center