Re: Batch update million records in prd DB

From: Yi Sun <yinan81(at)gmail(dot)com>
To: Michael Lewis <mlewis(at)entrata(dot)com>
Cc: PostgreSQL General <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: Batch update million records in prd DB
Date: 2021-02-25 12:36:26
Message-ID: CABWY_HCkr_4xRKFr08eB6zbJSgSV4aTQQkVt3CYJYwR5Uv4dJw@mail.gmail.com
Lists: pgsql-general

Hi Michael,

Thank you for your reply.

We found that each loop takes a different amount of time, and the loops get slower and slower. Our table is large and the query joins another table, so even with an index the last batch of 1000 records takes around 15 seconds. Will that be a problem? Will other concurrent updates have to wait up to 15 seconds for the locks to be released?
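The slowdown described above is typical when the "find the next IDs" query re-scans from the beginning of the table on every iteration. One common alternative is keyset pagination: walk the primary key, remember the last ID processed, and commit after each batch so locks are held only for the duration of one small transaction. Below is a minimal, hypothetical sketch of that pattern. It uses an in-memory SQLite database purely so the example is self-contained; the table name `accounts`, the `status` column, and the batch size are made-up illustrations, and on PostgreSQL you would issue the same statements through a driver such as psycopg2 with `%s` placeholders.

```python
import sqlite3

# Illustration only: in-memory SQLite stands in for the production database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany(
    "INSERT INTO accounts (id, status) VALUES (?, 'old')",
    [(i,) for i in range(1, 10_001)],
)
conn.commit()

BATCH = 1000
last_id = 0   # keyset cursor: highest id already processed
batches = 0

while True:
    # Keyset pagination: start each scan after the last processed id,
    # so finding the next batch stays cheap instead of getting slower.
    rows = conn.execute(
        "SELECT id FROM accounts WHERE id > ? AND status = 'old' "
        "ORDER BY id LIMIT ?",
        (last_id, BATCH),
    ).fetchall()
    if not rows:
        break
    ids = [r[0] for r in rows]
    placeholders = ",".join("?" * len(ids))
    conn.execute(
        f"UPDATE accounts SET status = 'new' WHERE id IN ({placeholders})",
        ids,
    )
    conn.commit()      # short transaction: row locks released per batch
    last_id = ids[-1]
    batches += 1

remaining = conn.execute(
    "SELECT count(*) FROM accounts WHERE status = 'old'"
).fetchone()[0]
print(batches, remaining)  # → 10 0
```

With this shape, concurrent writers only contend for the rows in the current batch, and only until that batch's commit, rather than waiting out one long-running transaction.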

Thanks and best regards

Michael Lewis <mlewis(at)entrata(dot)com> wrote on Wed, Feb 24, 2021 at 11:47 PM:

> Of course it will impact a system using that table, but not significant I
> expect and the production system should handle it. If you are committing
> like this, then you can kill the script at any time and not lose any work.
> The query to find the next IDs to update is probably the slowest part of
> this depending on what indexes you have.
>
