From: Dilip Kumar <dilipbalaut(at)gmail(dot)com>
To: Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Robert Haas <robertmhaas(at)gmail(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: decoupling table and index vacuum
Date: 2021-05-06 06:38:04
Message-ID: CAFiTN-uk=xr8_TZwo6KHNpkPqj0hp7TDyQ1owubqy+yMCavkvA@mail.gmail.com
Lists: pgsql-hackers
On Thu, May 6, 2021 at 8:27 AM Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com> wrote:
>
> > I'm doubtful about skipping WAL logging entirely - I'd have to think
> > harder about it, but I think that'd mean we'd restart from scratch after
> > crashes / immediate restarts as well, because we couldn't rely on the
> > contents of the "dead tid" files to be accurate. In addition to the
> > replication issues you mention.
>
> Yeah, not having WAL would have a big negative impact on various
> other aspects. Can we piggyback the WAL for the TID fork onto
> XLOG_HEAP2_PRUNE? That is, we add the buffer for the TID fork to
> XLOG_HEAP2_PRUNE and record the 64-bit number of the first dead TID
> in the list, so that we can add dead TIDs to the TID fork while
> replaying XLOG_HEAP2_PRUNE.
That could be an option, but we need to be careful about the buffer
lock order: we would now have to hold the lock on the TID fork buffer
as well as the heap buffer without creating any deadlock. There is
also the possibility of holding locks on multiple TID fork buffers,
depending on how many TIDs we have pruned. A rough sketch of one
possible lock ordering is below.
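
To make the ordering concern concrete, here is a minimal sketch. It
assumes a hypothetical "dead TID" fork; DEADTID_FORKNUM,
deadtid_block_of() and MAX_TIDFORK_PAGES_PER_PRUNE are invented names
for illustration and do not exist in PostgreSQL today. The idea is to
always lock the heap page first and then any TID fork pages in
increasing block-number order, so two backends can never lock the same
pair of pages in opposite orders:

#include "postgres.h"

#include "miscadmin.h"          /* START_CRIT_SECTION / END_CRIT_SECTION */
#include "storage/bufmgr.h"
#include "storage/itemptr.h"
#include "utils/rel.h"

/* Assumed names for the sketch only -- not existing PostgreSQL code. */
#define DEADTID_FORKNUM ((ForkNumber) 4)        /* hypothetical extra fork */
#define MAX_TIDFORK_PAGES_PER_PRUNE 4           /* arbitrary bound */
extern BlockNumber deadtid_block_of(ItemPointer tid);

static void
prune_and_record_dead_tids(Relation rel, Buffer heapbuf,
                           ItemPointer dead_tids, int ndead)
{
    Buffer      tidbufs[MAX_TIDFORK_PAGES_PER_PRUNE];
    int         ntidbufs = 0;
    BlockNumber first_blk;
    BlockNumber last_blk;
    BlockNumber blk;

    /*
     * dead_tids is assumed to be sorted, so the TID fork pages it maps
     * to form a contiguous, increasing block range.
     */
    first_blk = deadtid_block_of(&dead_tids[0]);
    last_blk = deadtid_block_of(&dead_tids[ndead - 1]);

    /* Rule 1: the heap page is always locked first. */
    LockBuffer(heapbuf, BUFFER_LOCK_EXCLUSIVE);

    /*
     * Rule 2: TID fork pages are locked after the heap page, and when
     * more than one is needed, strictly in increasing block-number order.
     */
    for (blk = first_blk;
         blk <= last_blk && ntidbufs < MAX_TIDFORK_PAGES_PER_PRUNE;
         blk++)
    {
        tidbufs[ntidbufs] = ReadBufferExtended(rel, DEADTID_FORKNUM, blk,
                                               RBM_NORMAL, NULL);
        LockBuffer(tidbufs[ntidbufs], BUFFER_LOCK_EXCLUSIVE);
        ntidbufs++;
    }

    START_CRIT_SECTION();
    /*
     * Prune the heap page, append dead_tids to the locked TID fork
     * pages, and emit a single XLOG_HEAP2_PRUNE record that registers
     * the heap buffer plus every TID fork buffer touched.
     */
    END_CRIT_SECTION();

    /* Release the TID fork buffers; the heap buffer stays with the caller. */
    while (ntidbufs > 0)
        UnlockReleaseBuffer(tidbufs[--ntidbufs]);
    LockBuffer(heapbuf, BUFFER_LOCK_UNLOCK);
}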
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com