Re: IO related waits

From: rob stone <floriparob(at)tpg(dot)com(dot)au>
To: Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>, veem v <veema0000(at)gmail(dot)com>, pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: IO related waits
Date: 2024-09-23 04:43:46
Message-ID: dbeae9e0004fa4b5d45f15f13d7ac4b7f6db4bf7.camel@tpg.com.au
Lists: pgsql-general

Hello,

On Sat, 2024-09-21 at 10:20 -0700, Adrian Klaver wrote:
> On 9/21/24 02:36, veem v wrote:
> >
> > Actually, here the application is using Kafka and Flink streams,
> > reusing existing code that did row-by-row commits; this has now
> > been changed to batch commits, i.e. the commit point has shifted
> > from row-by-row to batch. There are multiple sessions spawning at
> > the same time to process the incoming messages 24/7. Also, as
> > described in another ongoing thread, and as we saw in the past, we
> > did not see much difference between "batch commit" and "batch
> > insert" performance. We only realized that the row-by-row commit
> > performs worse.
>
> The bottom line is that to solve this a cost is going to have to be
> paid somewhere. Previously it was paid with autocommit, in the form
> of slow insert performance. You improved the speed of the inserts
> by wrapping multiple inserts in transactions, and that led you to
> this problem, where open transactions across sessions lead to
> deadlock issues due to the same id being inserted in concurrent
> open sessions. Per my and Greg Sabino Mullane's comments, the
> solution is going to need planning. Right now you are playing a
> round of Whack-a-Mole, making ad-hoc changes to portions of the
> process without reference to the process as a whole. At some point
> the parallel duplicate ids (records) need to be straightened out
> into a serial application of data. You and the other people
> involved need to come up with a coherent view of the process as a
> whole, with the goal of achieving that. Then you can start planning
> where that cost is best paid: 1) in the data before the streaming,
> 2) in the streaming process itself, 3) in the database, or
> 4) spread out across 1-3.
>

You are using Apache Flink to pump data into your database. It seems
to me that you have this occurring:

pipelineA --> sessionA
pipelineB --> sessionB
etc.

You haven't said whether the SQL doing the inserts is generated by
Flink or is your own code. If it is your own code, make sure you are
handling SQLException events properly.
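
As a rough sketch of what that can look like (the table
events(id bigint primary key, payload text) is made up for
illustration, not taken from this thread): sort each batch by key so
every session takes its row locks in the same order, commit once per
batch, and roll back on any failure so locks are released promptly.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Comparator;
import java.util.List;

public class BatchWriter {

    public record Row(long id, String payload) {}

    // Hypothetical target table: events(id bigint primary key,
    // payload text).
    private static final String UPSERT =
        "INSERT INTO events (id, payload) VALUES (?, ?) "
      + "ON CONFLICT (id) DO NOTHING";

    public static void writeBatch(Connection conn, List<Row> rows)
            throws SQLException {
        // Sorting every batch by id means all sessions acquire row
        // locks in the same order, which removes the crossing-order
        // deadlock between two open batch transactions.
        rows.sort(Comparator.comparingLong(Row::id));

        conn.setAutoCommit(false);  // one commit per batch, not per row
        try (PreparedStatement ps = conn.prepareStatement(UPSERT)) {
            for (Row r : rows) {
                ps.setLong(1, r.id());
                ps.setString(2, r.payload());
                ps.addBatch();
            }
            ps.executeBatch();
            conn.commit();
        } catch (SQLException e) {
            conn.rollback();  // release any locks promptly on failure
            throw e;
        }
    }
}

Note that ON CONFLICT (id) DO NOTHING only turns a duplicate key into
a skipped row; a second session inserting the same id still waits
until the first session commits or rolls back.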

If there is a Flink mailing list or user group, you'll probably find
more help there from other Flink users.

Follow Adrian's advice.
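
For completeness (again just a sketch, building on the hypothetical
writeBatch above): when a deadlock does slip through, it surfaces as
an SQLException with SQLState 40P01, and a serialization failure as
40001. Both mean the server has already aborted the transaction, so
after the rollback the batch can simply be run again, while anything
else should be logged and surfaced.

import java.sql.Connection;
import java.sql.SQLException;
import java.util.List;

public class RetryingWriter {

    // PostgreSQL SQLSTATEs for transactions the server aborted and
    // that are safe to re-run.
    private static final String DEADLOCK_DETECTED = "40P01";
    private static final String SERIALIZATION_FAILURE = "40001";

    public static void writeWithRetry(Connection conn,
                                      List<BatchWriter.Row> rows,
                                      int maxAttempts)
            throws SQLException {
        for (int attempt = 1; ; attempt++) {
            try {
                BatchWriter.writeBatch(conn, rows);
                return;
            } catch (SQLException e) {
                boolean retryable =
                    DEADLOCK_DETECTED.equals(e.getSQLState())
                    || SERIALIZATION_FAILURE.equals(e.getSQLState());
                if (!retryable || attempt >= maxAttempts) {
                    throw e;  // not retryable: log and surface it
                }
                try {
                    // writeBatch rolled back already, so this session
                    // holds no locks; back off briefly, then retry.
                    Thread.sleep(50L * attempt);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw e;
                }
            }
        }
    }
}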

HTH,
Rob
