From: | "Peter J(dot) Holzer" <hjp-pgsql(at)hjp(dot)at> |
---|---|
To: | pgsql-general(at)lists(dot)postgresql(dot)org |
Subject: | Re: update faster way |
Date: | 2024-09-14 11:24:51 |
Message-ID: | 20240914112451.bgxnbjv5b6unoijc@hjp.at |
Lists: pgsql-general
On 2024-09-14 16:10:15 +0530, yudhi s wrote:
> On Sat, 14 Sept, 2024, 1:09 pm Laurenz Albe, <laurenz(dot)albe(at)cybertec(dot)at> wrote:
>
> On Sat, 2024-09-14 at 08:43 +0530, yudhi s wrote:
> > We have to update a column value(from numbers like '123' to codes like 'abc'
> > by looking into a reference table data) in a partitioned table with billions
> > of rows in it, with each partition having 100's millions rows. As we tested
> > for ~30million rows it's taking ~20minutes to update.
[...]
> > 2) should we run each individual partition in a separate session (e.g. five
> > partitions will have the updates done at same time from 5 different
> > sessions)? And will it have any locking effect or we can just start the
> > sessions and let them run without impacting our live transactions?
>
> Option 2 is possible, and you can even have more than one session working
> on a single partition.
>
> However, the strain on your system's resources and particularly the row
> locks will impair normal database work.
>
> Essentially, you can either take an extended down time or perform the updates
> in very small chunks with a very low "lock_timeout" over a very long period
> of time. If any of the batches fails because of locking conflicts, it has
> to be retried.
>
> Investigate with EXPLAIN (ANALYZE) why the updates take that long. It could
> be a lame disk, tons of (unnecessary?) indexes or triggers, but it might as
> well be the join with the lookup table, so perhaps there is room for
> improvement (more "work_mem" for a hash join?)
>
>
>
> Thank you so much Laurenz.
>
> We have mostly insert/update happen on current day/live partition. So
> considering that, if we will run batch updates(with batch size of 1000) from
> five different sessions in parallel on different historical partition, at any
> time they will lock 5000 rows and then commit.
If you are updating billions of rows in batches of 5000, that means you
are executing hundreds of thousands or millions of UPDATE statements.
Which in turn means that you want as little overhead as possible per
batch, which means finding those 5000 rows should be quick. Which brings
us back to Igor's question: do you have any indexes in place which speed
up finding those 5000 rows? (The primary key almost certainly won't help
with that.) EXPLAIN (ANALYZE), as suggested by Laurenz, will certainly
help answer that question.
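
To make that concrete, here is roughly what I mean - untested, and all
table and column names are invented since I don't know your schema. A
partial index over the rows which still need converting, then
EXPLAIN (ANALYZE) on a single batch:

    -- hypothetical: tab_part1 is one partition, ref is the lookup table
    CREATE INDEX ON tab_part1 (id) WHERE code ~ '^[0-9]+$';

    -- see where the time goes for one batch of 5000 rows
    EXPLAIN (ANALYZE, BUFFERS)
    UPDATE tab_part1 t
       SET code = r.code
      FROM ref r
     WHERE r.old_number = t.code
       AND t.id IN (SELECT id FROM tab_part1
                     WHERE code ~ '^[0-9]+$' LIMIT 5000);

If the plan shows a sequential scan over the whole partition just to
find those 5000 rows, that is where your time is going.
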
> And also those rows will not collide with each other. So do you think
> that approach can anyway cause locking issues?
No, I don't think so. With a batch size that small I wouldn't expect
problems even on the live partition. But of course many busy parallel
sessions will put additional load on the system which may or may not be
noticeable by users (you might saturate the disks writing WAL entries
for example, which would slow down other sessions trying to commit).
> Regarding batch update with batch size of 1000, do we have any method exists in
> postgres (say like forall statement in Oracle) which will do the batch dml. Can
> you please guide me here, how we can do it in postgres.
Postgres offers several server-side languages. As an Oracle admin you
will probably find PL/pgSQL most familiar. But you could also use Perl
or Python or several others. And of course you could use any
programming/scripting language you like on the client side.
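
There is no direct equivalent of Oracle's FORALL, but a PL/pgSQL
procedure which processes one chunk per iteration and commits in
between (procedures can issue COMMIT since v11) does the same job.
Very roughly, again untested and with invented names, and assuming
every not-yet-converted value has a match in the lookup table:

    CREATE PROCEDURE convert_codes()
    LANGUAGE plpgsql
    AS $$
    DECLARE
        n bigint;
    BEGIN
        LOOP
            -- convert one batch of not-yet-converted (still numeric) rows
            UPDATE tab_part1 t
               SET code = r.code
              FROM ref r
             WHERE r.old_number = t.code
               AND t.id IN (SELECT id FROM tab_part1
                             WHERE code ~ '^[0-9]+$' LIMIT 5000);
            GET DIAGNOSTICS n = ROW_COUNT;
            EXIT WHEN n = 0;
            COMMIT;   -- release the row locks after each batch
        END LOOP;
    END;
    $$;

    CALL convert_codes();

Run something like that from each of your sessions (one per historical
partition); the COMMIT after every batch keeps the row locks short-lived.
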
hp
--
   _  | Peter J. Holzer      | Story must make more sense than reality.
|_|_) |                      |
| |   | hjp(at)hjp(dot)at    |    -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/   |       challenge!"