From: yudhi s <learnerdatabase99(at)gmail(dot)com>
To: Laurenz Albe <laurenz(dot)albe(at)cybertec(dot)at>
Cc: pgsql-general <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: update faster way
Date: 2024-09-14 10:40:15
Message-ID: CAEzWdqcxzKOMMe3LfTjfnOXwhRZNyci-aMO0ko4HYYAs8yYAFA@mail.gmail.com
Lists: pgsql-general
On Sat, 14 Sept, 2024, 1:09 pm Laurenz Albe, <laurenz(dot)albe(at)cybertec(dot)at>
wrote:
> On Sat, 2024-09-14 at 08:43 +0530, yudhi s wrote:
> > We have to update a column value (from numbers like '123' to codes like
> > 'abc', by looking into reference table data) in a partitioned table with
> > billions of rows in it, with each partition having hundreds of millions
> > of rows. As we tested, for ~30 million rows it takes ~20 minutes to
> > update. So if we go by this calculation, it's going to take days to
> > update all the values. So my questions are:
> >
> > 1) Is there any inbuilt way of running the update query in parallel
> > (e.g. using parallel hints etc.) to make it run faster?
> > 2) Should we run each individual partition in a separate session (e.g.
> > five partitions will have the updates done at the same time from 5
> > different sessions)? And will it have any locking effect, or can we
> > just start the sessions and let them run without impacting our live
> > transactions?
>
> Option 1 doesn't exist.
> Option 2 is possible, and you can even have more than one session working
> on a single partition.
>
> However, the strain on your system's resources and particularly the row
> locks will impair normal database work.
>
> Essentially, you can either take an extended down time or perform the
> updates in very small chunks with a very low "lock_timeout" over a very
> long period of time. If any of the batches fails because of locking
> conflicts, it has to be retried.
>
> Investigate with EXPLAIN (ANALYZE) why the updates take that long. It
> could be a lame disk, tons of (unnecessary?) indexes or triggers, but it
> might as well be the join with the lookup table, so perhaps there is room
> for improvement (more "work_mem" for a hash join?).
>
Thank you so much, Laurenz.
Most of our inserts/updates happen on the current-day/live partition. So,
considering that, if we run batch updates (with a batch size of 1000) from
five different sessions in parallel on different historical partitions, at
any point in time they will lock at most 5000 rows and then commit, and
those rows will not collide with each other. So do you think that approach
can still cause locking issues? We will ensure the update of the live
partition occurs when we have the least activity, so in that way we will
not need extended down time. Please correct me if I am wrong.
I have never used lock_timeout though; in the above case, do we need
lock_timeout?
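Something like the below is what I had in mind for each session working on
one historical partition (just a rough sketch; tab_part1, ref_tab and the
id/code columns are made-up names, and I am assuming the batches can be
driven by an indexed id range):

SET lock_timeout = '5s';   -- fail fast instead of blocking live transactions

BEGIN;
UPDATE tab_part1 t
SET    code_col = r.code
FROM   ref_tab r
WHERE  t.code_col = r.old_number
AND    t.id BETWEEN 1 AND 1000;   -- one batch of ~1000 rows
COMMIT;
-- repeat with the next id range; if a batch fails on lock_timeout, retry it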
Regarding the batch update with a batch size of 1000, is there any method
in Postgres (like the FORALL statement in Oracle) which will do the batch
DML? Can you please guide me on how we can do it in Postgres?
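The closest thing I could think of is a simple PL/pgSQL procedure that
loops and commits after every batch, roughly like the sketch below (again
with made-up table/column names, and assuming Postgres 11+ so that COMMIT
is allowed inside a procedure):

CREATE OR REPLACE PROCEDURE update_codes_in_batches(p_from bigint, p_to bigint)
LANGUAGE plpgsql
AS $$
DECLARE
    v_batch_size CONSTANT bigint := 1000;
    v_start      bigint := p_from;
BEGIN
    WHILE v_start <= p_to LOOP
        UPDATE tab_part1 t
        SET    code_col = r.code
        FROM   ref_tab r
        WHERE  t.code_col = r.old_number
        AND    t.id >= v_start
        AND    t.id <  v_start + v_batch_size;

        COMMIT;   -- release the row locks after every batch
        v_start := v_start + v_batch_size;
    END LOOP;
END;
$$;

-- each session would call it for its own partition / id range, e.g.:
-- CALL update_codes_in_batches(1, 100000000);

Please let me know if this is a sane replacement for Oracle's FORALL, or if
there is a better way.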
And yes, we will need to see what happens in the update using EXPLAIN
(ANALYZE). I was also trying to see if we can run EXPLAIN (ANALYZE) without
doing the actual update, but it seems that is not possible, as EXPLAIN
(ANALYZE) really executes the statement.
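The only workaround I could find is to wrap it in a transaction and roll it
back, so the data change is not kept (although the update work is still
done once), something like:

BEGIN;
EXPLAIN (ANALYZE, BUFFERS)
UPDATE tab_part1 t
SET    code_col = r.code
FROM   ref_tab r
WHERE  t.code_col = r.old_number;
ROLLBACK;   -- discard the changes, keep only the plan/timing output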