From: Ron Johnson <ronljohnsonjr(at)gmail(dot)com>
To: pgsql-general <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: IO related waits
Date: 2024-09-20 21:04:24
Message-ID: CANzqJaB28bd_DmMdztFbY8fqhgOXf8i5zLRz-8C7SvCfbuJA0Q@mail.gmail.com
Lists: pgsql-general
On Fri, Sep 20, 2024 at 4:47 PM Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> veem v <veema0000(at)gmail(dot)com> writes:
> > Able to reproduce this deadlock graph as below. Now my question is:
> > this is a legitimate scenario in which the same ID can get inserted
> > from multiple sessions, and in such cases it's expected to skip that
> > row (thus "ON CONFLICT DO NOTHING" is used). But as we see, it's
> > breaking the code with a deadlock error during race conditions where
> > a lot of parallel threads are operating. So how should we handle this
> > scenario?
>
> Do you have to batch multiple insertions into a single transaction?
> If so, can you arrange to order them consistently across transactions
> (eg, sort by primary key before inserting)?
>
That's exactly what I did back in the day. Because of database buffering,
sorting the data file at the OS level made the job 3x as fast as when the
input data was in random order.
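A minimal sketch of that ordering trick, for anyone following along. It
uses Python's bundled sqlite3 purely as a self-contained stand-in (SQLite
also supports ON CONFLICT ... DO NOTHING); the table and column names are
made up, and in Postgres you'd run the same pattern through psycopg inside
one transaction per batch:

```python
import sqlite3

def insert_batch(conn, rows):
    # Sort each batch by primary key before inserting, so every
    # transaction acquires row locks in the same order. Consistent lock
    # ordering across sessions is what prevents the lock-order deadlock
    # described upthread; ON CONFLICT DO NOTHING still skips duplicates.
    for row in sorted(rows, key=lambda r: r[0]):
        conn.execute(
            "INSERT INTO t (id, val) VALUES (?, ?) "
            "ON CONFLICT (id) DO NOTHING",
            row,
        )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")

insert_batch(conn, [(3, "c"), (1, "a"), (2, "b")])
insert_batch(conn, [(2, "x"), (4, "d")])  # id 2 already exists: skipped

print(conn.execute("SELECT id, val FROM t ORDER BY id").fetchall())
# [(1, 'a'), (2, 'b'), (3, 'c'), (4, 'd')]
```

With two concurrent sessions each inserting overlapping batches, the sort
guarantees neither can hold a lock the other needs while waiting on one
the other holds.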
--
Death to <Redacted>, and butter sauce.
Don't boil me, I'm still alive.
<Redacted> crustacean!
In thread:
Previous message: Tom Lane, 2024-09-20 20:47:08, Re: IO related waits
Next message: Adrian Klaver, 2024-09-20 21:11:38, Re: IO related waits