From: Michael Lewis <mlewis(at)entrata(dot)com>
To: Israel Brewster <ijbrewster(at)alaska(dot)edu>
Cc: Rob Sargent <robjsargent(at)gmail(dot)com>, Alban Hertroys <haramrae(at)gmail(dot)com>, Christopher Browne <cbbrowne(at)gmail(dot)com>, "pgsql-generallists(dot)postgresql(dot)org" <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: UPDATE many records
Date: 2020-01-06 20:54:35
Message-ID: CAHOFxGp6akB5MXfah7PrLbU+tCYe6g1Rua_y3e3usw-9vyZqrw@mail.gmail.com
Lists: pgsql-general
>
> I’m thinking it might be worth it to do a “quick” test on 1,000 or so
> records (or whatever number can run in a minute or so), watching the
> processor utilization as it runs. That should give me a better feel for
> where the bottlenecks may be, and how long the entire update process would
> take. I’m assuming, of course, that the total time would scale more or less
> linearly with the number of records.
>
I think that depends on how you identify and limit the update to those
1,000 records. If you select them by primary key with specific keys in an
array, the total time should scale close to linearly, because the WHERE
clause adds little to the overall execution time. If you use a sub-query
that is itself slow, you would need to exclude that from the timing. You
can always run EXPLAIN ANALYZE on the update and roll back rather than
commit.
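As a minimal sketch (table, column, and key values here are hypothetical),
the timing test could look like this:

    BEGIN;
    EXPLAIN (ANALYZE, BUFFERS)              -- reports actual execution time
    UPDATE my_table
    SET some_column = some_column * 2       -- hypothetical update expression
    WHERE id = ANY (ARRAY[1, 2, 3]);        -- limit to specific primary key values
    ROLLBACK;                               -- discard the changes

Note that EXPLAIN ANALYZE really executes the UPDATE, so the ROLLBACK is
what keeps the test repeatable without modifying the data.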