From: Lars Aksel Opsahl <Lars(dot)Opsahl(at)nibio(dot)no>
To: Rick Otten <rottenwindfish(at)gmail(dot)com>
Cc: "pgsql-performance(at)lists(dot)postgresql(dot)org" <pgsql-performance(at)lists(dot)postgresql(dot)org>
Subject: Re: PostgreSQL and a Catch-22 Issue related to dead rows
Date: 2024-12-10 05:19:24
Message-ID: AM7P189MB102832DD646EF0D81EB710DB9D3D2@AM7P189MB1028.EURP189.PROD.OUTLOOK.COM
Lists: pgsql-performance
From: Rick Otten <rottenwindfish(at)gmail(dot)com>
Sent: Monday, December 9, 2024 3:25 PM
To: Lars Aksel Opsahl <Lars(dot)Opsahl(at)nibio(dot)no>
Cc: pgsql-performance(at)lists(dot)postgresql(dot)org <pgsql-performance(at)lists(dot)postgresql(dot)org>
Subject: Re: PostgreSQL and a Catch-22 Issue related to dead rows
Yes, there are very good reasons for the way removal of dead rows works now, but is there any chance of adding an option when creating a table to disable this behavior, for instance for unlogged tables?
Are you saying your job is I/O bound (not memory or CPU), and that you can only improve I/O performance by committing more frequently, because the commit removes dead tuples that you have no other means to clear? Is your WAL already on your fastest disk?
All of your parallel jobs are operating on the same set of rows? So partitioning the table wouldn't help?
The problem is not I/O or CPU bound, nor related to WAL files; the issue is that "dead rows" are slowing down the SQL queries. As for partitioning: at this stage the data are split into about 750 different topology structures. Many workers operate in parallel on these different structures, but only one worker at a time works on any given structure.
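For context, the dead-row buildup described above can be observed per table through the statistics views; a minimal sketch (the table name is hypothetical, not from this thread):

```sql
-- Hypothetical table name used for illustration only.
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'topo_edge_data';

-- Dead rows become reclaimable only once no open transaction can
-- still see them (the xmin horizon has advanced); a manual VACUUM
-- then removes them:
VACUUM (VERBOSE) topo_edge_data;
```

Note that as long as a long-running transaction in any of the parallel workers holds back the xmin horizon, even VACUUM cannot remove the dead rows, which is the Catch-22 the subject line refers to.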
Thanks
Lars