Re: Tuple concurrency issue in large objects

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Justin <zzzzz(dot)graf(at)gmail(dot)com>
Cc: Daniel Verite <daniel(at)manitou-mail(dot)org>, shalini(at)saralweb(dot)com, Rene Romero Benavides <rene(dot)romero(dot)b(at)gmail(dot)com>, Postgres General <pgsql-general(at)postgresql(dot)org>
Subject: Re: Tuple concurrency issue in large objects
Date: 2019-12-18 17:12:07
Message-ID: 7594.1576689127@sss.pgh.pa.us
Lists: pgsql-general

Justin <zzzzz(dot)graf(at)gmail(dot)com> writes:
> I now see what is causing this specific issue...
> The updates and row versioning are happening one 2kB chunk at a time; that's
> going to make tracking what other clients are doing a difficult task.

Yeah, it's somewhat unfortunate that the chunkiness of the underlying
data storage becomes visible to clients if they try to do concurrent
updates of the same large object. Ideally you'd only get a concurrency
failure if you tried to overwrite the same byte(s) that somebody else
did, but as it stands, modifying nearby bytes might be enough --- or
not, if there's a chunk boundary in between.
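To make the failure mode concrete, here is a rough sketch of the chunk
arithmetic (assuming the default LOBLKSIZE of 2048 bytes; this is an
illustration of the idea, not actual PostgreSQL code, and the helper
names are invented for the example):

```python
# pg_largeobject stores each large object as LOBLKSIZE-byte chunk rows
# (2048 bytes by default), and a write rewrites whole chunk rows. Two
# concurrent writers can therefore conflict whenever their byte ranges
# touch a common chunk, even if the bytes themselves never overlap.

LOBLKSIZE = 2048  # default chunk size (BLCKSZ / 4)

def chunks_touched(offset, length):
    """Return the set of chunk numbers a write of `length` bytes
    starting at `offset` would rewrite."""
    first = offset // LOBLKSIZE
    last = (offset + length - 1) // LOBLKSIZE
    return set(range(first, last + 1))

def may_conflict(write_a, write_b):
    """True if two (offset, length) writes share at least one chunk."""
    return bool(chunks_touched(*write_a) & chunks_touched(*write_b))

# Bytes 100-199 and 1000-1099 don't overlap, but both live in chunk 0:
print(may_conflict((100, 100), (1000, 100)))   # True
# Bytes 100-199 and 2100-2199 fall in different chunks (0 vs. 1):
print(may_conflict((100, 100), (2100, 100)))   # False
```

So whether two "nearby" writes collide depends entirely on where the
chunk boundaries happen to fall relative to the byte ranges.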

On the whole, though, it's not clear to me why concurrent updates of
sections of large objects are a good application design. You probably
ought to rethink how you're storing your data.

regards, tom lane
