From: veem v <veema0000(at)gmail(dot)com>
To: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: IO related waits
Date: 2024-09-21 09:36:45
Message-ID: CAB+=1TU7FcET2RDV0W969B3q0UB3xwiEVdPmqLybBmfqU2VC8w@mail.gmail.com
Lists: pgsql-general
On Sat, 21 Sept 2024 at 03:47, Peter J. Holzer <hjp-pgsql(at)hjp(dot)at> wrote:
> On 2024-09-20 14:11:38 -0700, Adrian Klaver wrote:
> > On 9/20/24 1:01 PM, veem v wrote:
> > > Able to reproduce this deadlock graph as below. Now my question is,
> > > this is a legitimate scenario in which the same ID can get inserted from
> > > multiple sessions, and in such cases it's expected to skip that row
> > > (thus "On conflict Do nothing" is used). But as we see it's breaking the code
> >
> > Yeah, as I see it that would not work with concurrent uncommitted sessions,
> > as it would be unresolved whether a conflict actually exists until at least
> > one of the sessions completes.
> >
> > > with deadlock error during race conditions where a lot of parallel
> > > threads are operating. So how should we handle this scenario? Will
> > > setting the "lock_timeout" parameter at session level help us
> > > anyway here?
> >
> > Serializable transaction?:
> >
> > https://www.postgresql.org/docs/current/transaction-iso.html#XACT-SERIALIZABLE
>
> Doesn't help here, at least not directly. It would help indirectly
> because isolation level serializable makes it very probable that
> serialization errors occur. So an application designed for serializable
> would have some kind of retry logic already in place.
>
> So that leads us to another solution:
>
> Retry each batch (possibly after reducing the batch size) until it
> succeeds.
>
>
Actually, the application here uses Kafka and Flink streaming and reuses
existing code that was doing row-by-row commits; that code has now been
changed to batch commits, i.e. the commit point has been shifted from per
row to per batch. Multiple sessions are spawned at the same time to process
the incoming messages 24/7. Also, as described in another ongoing thread
and as we saw in the past, we did not see much difference between "batch
commit" and "batch insert" performance; we only found that row-by-row
commit performs noticeably worse.
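
To illustrate what I mean by the commit point being shifted, a rough
psycopg2 sketch with made-up table/column names (not the actual job code):

import psycopg2

# Hypothetical connection and table names, for illustration only.
conn = psycopg2.connect("dbname=app")

def flush_batch(messages):
    """Insert a batch row by row, but commit only once at the end."""
    with conn.cursor() as cur:
        for msg in messages:
            cur.execute(
                "INSERT INTO target_tab (id, payload) VALUES (%s, %s) "
                "ON CONFLICT (id) DO NOTHING",
                (msg["id"], msg["payload"]))
    conn.commit()   # one commit per batch (previously: one commit per row)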
Now, in such a scenario, where the messages arrive from a streaming
framework (Kafka + Flink) and the inserts still happen row by row (only the
commit has been shifted to the batch), I don't see any way to sort the ID
column within this streaming process so that the IDs won't overlap across
sessions.
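
For what it's worth, the interleaving that produces the deadlock seems to
be roughly the following (a sketch only, with a hypothetical table
"target_tab" keyed on "id" and made-up IDs; each session inserts the same
IDs in a different order inside a still-open transaction):

import threading, time
import psycopg2

def worker(ids, delay):
    # Each worker is one session holding one open transaction.
    conn = psycopg2.connect("dbname=app")   # connection details assumed
    cur = conn.cursor()
    try:
        for i in ids:
            cur.execute(
                "INSERT INTO target_tab (id, payload) VALUES (%s, 'x') "
                "ON CONFLICT (id) DO NOTHING", (i,))
            time.sleep(delay)               # force the two sessions to interleave
        conn.commit()
    except psycopg2.errors.DeadlockDetected as e:
        conn.rollback()                     # this session was picked as the victim
        print("deadlock victim:", e)
    finally:
        conn.close()

t1 = threading.Thread(target=worker, args=([1, 2], 1))
t2 = threading.Thread(target=worker, args=([2, 1], 1))
t1.start(); t2.start(); t1.join(); t2.join()

Each insert has to wait for the other uncommitted session to find out
whether a conflict really exists, so the two sessions end up waiting on
each other and one of them is aborted.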
In such a situation, the only way seems to be to replay the messages for
which the deadlock error happens. As I understand it, during a deadlock one
of the sessions has its transaction aborted by Postgres; perhaps we can
save those messages in some exception table and then replay them?
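
Something like the sketch below is what I have in mind, combining the batch
retry suggested above with an exception table for later replay
("failed_messages" is a hypothetical table name, and this is untested):

import psycopg2

def process_batch(conn, messages, max_retries=3):
    # Retry the whole batch a few times if it is chosen as the deadlock victim.
    for attempt in range(max_retries):
        try:
            with conn.cursor() as cur:
                for msg in messages:
                    cur.execute(
                        "INSERT INTO target_tab (id, payload) VALUES (%s, %s) "
                        "ON CONFLICT (id) DO NOTHING",
                        (msg["id"], msg["payload"]))
            conn.commit()
            return
        except psycopg2.errors.DeadlockDetected:
            conn.rollback()
    # Still failing after the retries: park the batch for later replay.
    with conn.cursor() as cur:
        for msg in messages:
            cur.execute(
                "INSERT INTO failed_messages (id, payload) VALUES (%s, %s)",
                (msg["id"], msg["payload"]))
    conn.commit()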