Re: BUG #17266: could not truncate file "base/126370/130666" to 0 blocks: Permission denied

From: Kyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com>
To: adam(at)drose-consulting(dot)com, pgsql-bugs(at)lists(dot)postgresql(dot)org
Subject: Re: BUG #17266: could not truncate file "base/126370/130666" to 0 blocks: Permission denied
Date: 2021-11-05 06:07:08
Message-ID: 20211105.150708.133689831588998978.horikyota.ntt@gmail.com
Lists: pgsql-bugs

At Wed, 03 Nov 2021 06:57:00 +0000, PG Bug reporting form <noreply(at)postgresql(dot)org> wrote in
> The following bug has been logged on the website:
>
> Bug reference: 17266
> Logged by: Adam Santiaji
> Email address: adam(at)drose-consulting(dot)com
> PostgreSQL version: 12.0
> Operating system: Windows Server 2016 Standard
> Description:
>
> from log postgreSQL i found this error log from 2021-11-01
> 2021-11-01 08:55:47.110 +07 [6860] ERROR: could not truncate file
> "base/126370/130666" to 0 blocks: Permission denied
> 2021-11-01 08:55:47.110 +07 [6860] CONTEXT: automatic vacuum of table
> "ykkap.public.imw_t_thread"
> 2021-11-01 08:58:47.124 +07 [9604] ERROR: could not truncate file
> "base/126370/127251" to 0 blocks: Permission denied
> 2021-11-01 08:58:47.124 +07 [9604] CONTEXT: automatic vacuum of table
> "ykkap.public.im_async_task_info"
>
> and when i try to backup database on 2021-11-02 it show error
> 2021-11-02 21:21:36.970 +07 [10252] ERROR: missing chunk number 0 for toast
> value 250117 in pg_toast_127251
>
> what does it mean Permission denied? is it cause of permission access from
> windows?
> how to avoid this error log in future?

Most likely the truncation was competing with antivirus software for
access to the files. If so, excluding the database directory from
your antivirus scans should prevent it, and doing so is highly
recommended in any case. In that case this is not a bug but a known
(and unwanted) interaction.
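
For example, if the server happens to run Windows Defender (just an
assumption; other antivirus products have their own exclusion
settings), the data directory can be excluded with a one-liner like
the following, adjusting the path to your actual data directory:

  Add-MpPreference -ExclusionPath "C:\Program Files\PostgreSQL\12\data"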

The second error appears to be a consequence of the first
failure. Namely, the first failure left in the table
im_async_task_info (filenode 127251) a broken TOAST pointer to a
TOAST tuple in pg_toast_127251 that should have been removed
successfully. In short, the database is corrupted.
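
By the way, to double-check which relation a file name from the log
belongs to, pg_filenode_relation() maps the filenode back to a
relation; the 0 below means the database's default tablespace:

  -- should return 'im_async_task_info' if the mapping above is right
  SELECT pg_filenode_relation(0, 127251);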

If the table contains no significant data, or no rows have been
inserted into it since the first failure, dropping the table and then
re-creating it would work (see the sketch below). If it is
sufficient to restore the database to the state just before the first
error occurred, point-in-time recovery to that time will also work.
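
A sketch of the drop-and-recreate route. CREATE TABLE ... (LIKE ...)
reads only the catalogs, so it does not touch the broken data; note
that it copies neither foreign keys nor privileges, so re-create
those separately if you have any:

  BEGIN;
  CREATE TABLE im_async_task_info_new
    (LIKE im_async_task_info INCLUDING ALL);
  DROP TABLE im_async_task_info;
  ALTER TABLE im_async_task_info_new RENAME TO im_async_task_info;
  COMMIT;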

Otherwise, you may need to find the rows that cause the error, then
copy all the other rows into a new table that replaces the broken
one. You may be able to use foreign-key constraints or other clues to
find them; one mechanical way is sketched below. Then check the
consistency of the data in the new table and replace the old table
with it. There might be a nicer way, but I don't know of one.
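
A minimal sketch of that mechanical way: copy rows one by one and
skip the ones whose TOAST data can no longer be read. It assumes a
key column named "id"; substitute your real primary key:

  CREATE TABLE im_async_task_info_salvaged
    (LIKE im_async_task_info INCLUDING ALL);

  DO $$
  DECLARE
    r record;
  BEGIN
    FOR r IN SELECT id FROM im_async_task_info LOOP
      BEGIN
        -- copying the full row forces detoasting; broken rows fail here
        INSERT INTO im_async_task_info_salvaged
          SELECT * FROM im_async_task_info WHERE id = r.id;
      EXCEPTION WHEN OTHERS THEN
        RAISE NOTICE 'skipping broken row with id = %', r.id;
      END;
    END LOOP;
  END;
  $$;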

Whatever you do other than recovery, it is recommended to dump all
the data with pg_dump (not pg_basebackup, which would copy the broken
files verbatim) and then restore it into a newly created database.
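
A sketch, assuming a custom-format dump and a new database named
ykkap_new; do this only after the broken table has been dealt with,
since pg_dump would otherwise fail on the same missing chunk:

  pg_dump -Fc -f ykkap.dump ykkap
  createdb ykkap_new
  pg_restore -d ykkap_new ykkap.dump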

regards.

--
Kyotaro Horiguchi
NTT Open Source Software Center
