From: Nagaraj Raj <nagaraj(dot)sf(at)yahoo(dot)com>
To: pgsql-bugs(at)lists(dot)postgresql(dot)org
Subject: Issue with Running VACUUM on Database with Large Tables
Date: 2023-12-25 12:10:40
Message-ID: 1237927313.5086260.1703506240368@mail.yahoo.com
Lists: pgsql-bugs
Hello,
While running VACUUM ANALYZE on our database, which contains a large number of tables (approximately 200k), I encountered an issue. If a table is dropped while the vacuum is in progress, the job fails at that point with an error stating "OID relation is not found" and exits. This interrupts the entire run without any warning and without handling the error gracefully. A rough illustration of the sequence follows.
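For illustration, this is roughly the sequence that produces the failure we see (a minimal sketch based on our report above; mydb and some_table are placeholder names, and connection settings are assumed to come from the environment):

    # Session 1: start a database-wide VACUUM ANALYZE, which visits every table.
    psql -d mydb -c "VACUUM (ANALYZE)"

    # Session 2, while session 1 is still running: drop one of the tables.
    psql -d mydb -c "DROP TABLE some_table"

    # Session 1 then errors out and exits when it reaches the dropped
    # table, rather than skipping it and continuing.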
Because our database design creates and drops objects dynamically, this abrupt termination makes it difficult to complete a full vacuum pass. The issue has persisted across multiple versions, including the one we currently run (14.8).
Is this behavior expected, or could it be a bug? It would be helpful to have a mechanism for handling such cases, for example emitting a notice or warning when a dropped table is encountered, skipping that table, and continuing with the rest of the VACUUM ANALYZE. In the meantime, we are working around it along the lines of the sketch below.
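As a stopgap, a per-table loop like the following avoids the problem, since a table dropped mid-run only fails its own statement instead of aborting the whole pass (a rough sketch, not production code; mydb is a placeholder and connection settings are assumed to come from the environment):

    #!/bin/sh
    # List every user table, then vacuum each one in its own psql call so
    # that a failure on one table does not stop the others.
    psql -d mydb -At -c \
      "SELECT format('%I.%I', schemaname, relname) FROM pg_stat_user_tables" |
    while read -r tbl; do
        psql -d mydb -c "VACUUM (ANALYZE) $tbl" || echo "skipped $tbl" >&2
    done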
Your support in resolving this matter is greatly appreciated.
Thanks,
Rj