From: yudhi s <learnerdatabase99(at)gmail(dot)com>
To: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: update faster way
Date: 2024-09-14 14:56:32
Message-ID: CAEzWdqcgLj_vothSMttqN4N3Ly46O=MyhmUMvyQqfg6RiuYioQ@mail.gmail.com
Lists: pgsql-general
On Sat, Sep 14, 2024 at 4:55 PM Peter J. Holzer <hjp-pgsql(at)hjp(dot)at> wrote:
>
> Which in turn means that you want as little overhead as possible per
> batch which means finding those 5000 rows should be quick. Which brings
> us back to Igor's question: Do you have any indexes in place which speed
> up finding those 5000 rows (the primary key almost certainly won't help
> with that). EXPLAIN (ANALYZE) (as suggested by Laurenz) will certainly
> help answering that question.
>
> > And also those rows will not collide with each other. So do you think
> > that approach can anyway cause locking issues?
>
> No, I don't think so. With a batch size that small I wouldn't expect
> problems even on the live partition. But of course many busy parallel
> sessions will put additional load on the system which may or may not be
> noticeable by users (you might saturate the disks writing WAL entries
> for example, which would slow down other sessions trying to commit).
>
>
> > Regarding batch update with batch size of 1000, do we have any method
> > exists in postgres (say like forall statement in Oracle) which will do
> > the batch dml. Can you please guide me here, how we can do it in postgres.
>
> Postgres offers several server side languages. As an Oracle admin you
> will probably find PL/pgSQL most familiar. But you could also use Perl
> or Python or several others. And of course you could use any
> programming/scripting language you like on the client side.
>
>
When you said "(the primary key almost certainly won't help with that)", I
am trying to understand why that is so.
I was thinking of using that column as an incrementing filter and driving
the eligible rows based on it. If it had been a sequence, I think it would
have helped, but in this case it's a UUID, so I may not be able to do the
batch DML using it as the filter criteria. In that case, will it be fine to
drive the update based on ctid, something as below? Each session will take
a range of five days of data (i.e. five daily partitions) and will execute
a query like the one below, which updates in batches of 10K and then
commits. Is this fine? Or is there some better way of doing batch DML in
Postgres PL/pgSQL?
DO $$
DECLARE
    l_rowid_array ctid[];
    l_ctid        ctid;
    l_array_size  INT := 10000;
    l_processed   INT := 0;
BEGIN
    -- Loop once per batch of l_array_size rows in the date range
    FOR l_cnt IN 0..(SELECT COUNT(*) FROM part_tab
                     WHERE part_date > DATE '2024-09-01'
                       AND part_date < DATE '2024-09-05') / l_array_size LOOP
        -- Collect the ctids of the next batch of rows
        l_rowid_array := ARRAY(
            SELECT ctid
            FROM part_tab
            WHERE part_date > DATE '2024-09-01'
              AND part_date < DATE '2024-09-05'
            LIMIT l_array_size OFFSET l_cnt * l_array_size
        );
        -- Update each row in the batch individually by its ctid
        FOREACH l_ctid IN ARRAY l_rowid_array LOOP
            UPDATE part_tab
            SET column1 = reftab.code
            FROM reference_tab reftab
            WHERE part_tab.column1 = reftab.column1
              AND part_tab.ctid = l_ctid;
            l_processed := l_processed + 1;
        END LOOP;
        COMMIT;  -- commit after each batch (allowed in a DO block on PG 11+)
    END LOOP;
END $$;
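
Or, as a variant of the same idea, would it be better to update each batch
with a single set-based statement joining on the collected ctids, instead of
looping row by row inside the batch? Something like the rough sketch below,
using the same assumed table and column names as above. My assumption is
that one UPDATE per 10K ctids would cut the per-statement overhead compared
to 10K individual updates, but please correct me if the OFFSET-based paging
is itself the problem here.

DO $$
DECLARE
    l_rowid_array ctid[];
    l_array_size  INT := 10000;
    l_offset      BIGINT := 0;
BEGIN
    LOOP
        -- Collect the ctids of the next batch of rows
        l_rowid_array := ARRAY(
            SELECT ctid
            FROM part_tab
            WHERE part_date > DATE '2024-09-01'
              AND part_date < DATE '2024-09-05'
            LIMIT l_array_size OFFSET l_offset
        );
        EXIT WHEN cardinality(l_rowid_array) = 0;
        -- One set-based UPDATE per batch instead of one per row
        UPDATE part_tab
        SET column1 = reftab.code
        FROM reference_tab reftab
        WHERE part_tab.column1 = reftab.column1
          AND part_tab.ctid = ANY (l_rowid_array);
        COMMIT;  -- commit after each batch
        l_offset := l_offset + l_array_size;
    END LOOP;
END $$;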