Re: INSERT ... ON CONFLICT DO UPDATE

From: Deven Phillips <deven(dot)phillips(at)gmail(dot)com>
To: Rafal Pietrak <rafal(at)ztk-rp(dot)eu>
Cc: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: INSERT ... ON CONFLICT DO UPDATE
Date: 2015-07-20 11:38:15
Message-ID: CAJw+4NDPwA5GSEyZPhPn9a-h+5FKEVypbKckCA4uSELd_ZhAxw@mail.gmail.com
Lists: pgsql-general

For this purpose, I have seen it recommended to use a UUID instead of a
randomly generated integer. I do this myself in production applications,
and over millions of records I have yet to log a conflict. Also, as stated
above, you could create a plpgsql function that achieves exactly what you
want (retry the insert until it succeeds).
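A minimal sketch of that retry approach, assuming a hypothetical table
named "vouchers" with a random "code" column (the names are made up for
illustration, and the UUID variant assumes the pgcrypto extension):

-- Hypothetical table: the randomly generated code must stay unique.
CREATE TABLE vouchers (
    code  bigint PRIMARY KEY,
    owner text NOT NULL
);

-- UUID alternative (pgcrypto provides gen_random_uuid()):
-- CREATE TABLE vouchers (
--     code  uuid PRIMARY KEY DEFAULT gen_random_uuid(),
--     owner text NOT NULL
-- );

-- Keep generating a new random code until the INSERT succeeds.
CREATE OR REPLACE FUNCTION insert_voucher(p_owner text) RETURNS bigint AS $$
DECLARE
    v_code bigint;
BEGIN
    LOOP
        v_code := (random() * 999999999)::bigint;
        BEGIN
            INSERT INTO vouchers (code, owner) VALUES (v_code, p_owner);
            RETURN v_code;
        EXCEPTION WHEN unique_violation THEN
            NULL;  -- collision with an existing code: loop and try again
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;

Calling SELECT insert_voucher('someone') then returns the code that was
actually stored, however many attempts that took.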

Just my 2 cents,

Deven

On Sun, Jul 19, 2015 at 9:47 AM, Rafal Pietrak <rafal(at)ztk-rp(dot)eu> wrote:

> Hi,
>
> W dniu 19.07.2015 o 14:10, Geoff Winkless pisze:
> > On 19 July 2015 at 11:30, Rafal Pietrak <rafal(at)ztk-rp(dot)eu> wrote:
> >
> > when I have to invent/introduce additional
> > features/columns/attributes (like a key in addition to a sequence)
> > which are not required by the design, but are necessary for the
> > implementation, that is a workaround (almost by definition).
> >
> >
> > I'm sorry that you feel defensive about this, and apologies for
> > repeating myself, but the fact that the random key can be duplicated
> > means it should not be used as a primary key, so using a sequence as a
> > primary key is not a workaround, it's a correction to the design.
>
> OK. I think I owe an apology as well. I hope my defense wasn't too
> fierce.
>
> But I need to clarify one thing:
>
> Although "a random" can duplicate its previous values, "my random(s)"
> (which are created for this application purpose) cannot be duplicated
> when it's stored in the database as "live active data". I understand,
> that UNIQUE constraint is precisely the RDBMS tool to guarantee that.
>
> Naturally, if I put a UNIQUE constraint on that column, or make it a PK,
> is just a matter of choice here. That shouldn't rise concern. I just use
> tools RDBMS provides for "semantics" the application needs.
>
>
> >
> > Notwithstanding that, the reason UPSERT is required is that two
> > competing transactions can end up fighting over an INSERT, and the
> > workarounds required are either highly complex or not 100% successful
> > (e.g. http://www.depesz.com/2012/06/10/why-is-upsert-so-complicated/)
> >
>
> I already knew that Depesz article.
>
> In fact, it was the reason I brought up "my usage scenario" here now.
> I'm not as competent as Depesz, so:
>
> 1. I worry that, while restarting a failed INSERT transaction at the
> application level, I will miss something important (that you people
> know by heart) and unwittingly corrupt and/or "suboptimise" my
> application/data (much to the point Depesz described).
>
> 2. But since the majority of the hard work of the PostgreSQL UPSERT
> implementation is already done, I wanted to check whether the usage
> scenario I point out falls under it as a "case", is covered by some
> "idiomatic SQL sequence", or otherwise. From the current discussion I
> gather it's "otherwise" - it isn't considered applicable. (So I
> concluded: I'll live with manually re-attempting a failed insert.)
>
> -R
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general(at)postgresql(dot)org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
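For completeness: once the ON CONFLICT syntax from this thread's subject
is available (PostgreSQL 9.5), the manual re-attempt loop no longer needs
an exception handler. A minimal sketch, reusing the hypothetical
"vouchers" table from above:

CREATE OR REPLACE FUNCTION insert_voucher_95(p_owner text) RETURNS bigint AS $$
DECLARE
    v_code bigint;
BEGIN
    LOOP
        v_code := (random() * 999999999)::bigint;
        -- DO NOTHING suppresses the duplicate-key error; RETURNING yields
        -- a row only when the insert actually happened.
        INSERT INTO vouchers (code, owner)
        VALUES (v_code, p_owner)
        ON CONFLICT (code) DO NOTHING
        RETURNING code INTO v_code;
        IF FOUND THEN
            RETURN v_code;
        END IF;
    END LOOP;
END;
$$ LANGUAGE plpgsql;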
