From: Vallimaharajan G <vallimaharajan(dot)gs(at)zohocorp(dot)com>
To: "pgsql-hackers" <pgsql-hackers(at)postgresql(dot)org>
Cc: "zlabs-cstore(at)zohocorp(dot)com" <zlabs-cstore(at)zohocorp(dot)com>, "pgsql-hackers" <pgsql-hackers(at)lists(dot)postgresql(dot)org>, "pgsql-bugs" <pgsql-bugs(at)lists(dot)postgresql(dot)org>
Subject: [Bug] Heap Use After Free in parallel_vacuum_reset_dead_items Function
Date: 2024-11-25 18:27:07
Message-ID: 1936493cc38.68cb2ef27266.7456585136086197135@zohocorp.com
Lists: pgsql-bugs pgsql-hackers
Hi Developers,
We have discovered a bug in the parallel_vacuum_reset_dead_items function in PG v17.2. Specifically:
TidStoreDestroy(dead_items) frees the TidStore that the local variable dead_items points to.
pvs->dead_items is then reinitialized via TidStoreCreateShared(), but the local variable dead_items is not updated and still points to the freed store.
The code then accesses the stale local pointer instead of the newly created pvs->dead_items, as seen in these lines:
pvs->shared->dead_items_dsa_handle = dsa_get_handle(TidStoreGetDSA(dead_items));
pvs->shared->dead_items_handle = TidStoreGetHandle(dead_items);
This can lead to undefined behaviour or crashes due to the use of invalid memory.
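For clarity, here is a simplified sketch of the affected code path and one possible correction, with names as they appear to us in vacuumparallel.c for v17; the attached patch may differ in detail:

/* Simplified sketch of parallel_vacuum_reset_dead_items() */
TidStore   *dead_items = pvs->dead_items;
VacDeadItemsInfo *dead_items_info = &(pvs->shared->dead_items_info);

/* Frees the TidStore that the local variable still points to */
TidStoreDestroy(dead_items);

/* pvs->dead_items now points to a fresh shared TidStore ... */
pvs->dead_items = TidStoreCreateShared(dead_items_info->max_bytes,
                                       LWTRANCHE_PARALLEL_VACUUM_DSA);

/*
 * ... but the existing code dereferences the stale local pointer here.
 * Using pvs->dead_items (or refreshing the local variable after the
 * TidStoreCreateShared() call) avoids the use-after-free:
 */
pvs->shared->dead_items_dsa_handle =
    dsa_get_handle(TidStoreGetDSA(pvs->dead_items));
pvs->shared->dead_items_handle = TidStoreGetHandle(pvs->dead_items);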
We caught this issue while running the existing regression tests from vacuum_parallel.sql with our custom malloc allocator implementation.
For reference, we previously shared that custom malloc allocator implementation in a separate bug-fix submission (message ID: ).
Failing regression test:
SET max_parallel_maintenance_workers TO 4;
SET min_parallel_index_scan_size TO '128kB';
CREATE TABLE parallel_vacuum_table (a int) WITH (autovacuum_enabled = off);
INSERT INTO parallel_vacuum_table SELECT i from generate_series(1, 10000) i;
CREATE INDEX regular_sized_index ON parallel_vacuum_table(a);
CREATE INDEX typically_sized_index ON parallel_vacuum_table(a);
CREATE INDEX vacuum_in_leader_small_index ON parallel_vacuum_table((1));
DELETE FROM parallel_vacuum_table;
VACUUM (PARALLEL 4, INDEX_CLEANUP ON) parallel_vacuum_table;
Please let me know if you have any questions or would like further details.
Thanks & Regards,
Vallimaharajan G
Member Technical Staff
ZOHO Corporation
Attachment: parallel_vacuum_fix.patch (application/octet-stream, 1.1 KB)