Re: BUG #18166: 100 Gb 18000000 records table update

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: ruslan(dot)ganeev(at)list(dot)ru
Cc: pgsql-bugs(at)lists(dot)postgresql(dot)org
Subject: Re: BUG #18166: 100 Gb 18000000 records table update
Date: 2023-10-21 01:22:40
Message-ID: 4020107.1697851360@sss.pgh.pa.us
Lists: pgsql-bugs

PG Bug reporting form <noreply(at)postgresql(dot)org> writes:
> We tried to make a script that sets enddate = '2022-12-31' for all
> records whose value in «DataVip» is not the maximum. For the other
> records the script sets enddate = null.
> The problem is that the script runs for 6 hours, and most of that
> time is spent updating the indexes.

This is not a bug. However ... a common workaround for bulk updates
like that is to drop all the table's indexes and then recreate them
afterwards. It's often quicker than doing row-by-row index updates.
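For illustration, a minimal sketch of that approach (the table and
index names here are hypothetical, inferred from the report; substitute
your actual ones):

    -- Drop each of the table's indexes first.
    DROP INDEX idx_mytable_enddate;

    -- One set-based UPDATE rather than row-by-row updates:
    UPDATE mytable
       SET enddate = CASE
                       WHEN "DataVip" < (SELECT max("DataVip") FROM mytable)
                       THEN DATE '2022-12-31'
                       ELSE NULL
                     END;

    -- Recreate the indexes afterwards.
    CREATE INDEX idx_mytable_enddate ON mytable (enddate);

Recreating an index scans the table once in bulk, which is usually far
cheaper than maintaining every index across millions of individual row
updates.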

regards, tom lane
