From: | "Mike Sofen" <msofen(at)runbox(dot)com> |
---|---|
To: | "'pgsql-general'" <pgsql-general(at)postgresql(dot)org> |
Subject: | Re: UPDATE OR REPLACE? |
Date: | 2016-09-01 12:20:11 |
Message-ID: | 041f01d2044b$313bd9f0$93b38dd0$@runbox.com |
Lists: pgsql-general
On Thu, Sep 1, 2016 at 12:10 PM, dandl <david(at)andl(dot)org> wrote:
> Sqlite has options to handle an update that causes a duplicate key. Is
> there anything similar in Postgres?
> This is not an UPSERT. The scenario is an UPDATE that changes some key
> field so that there is now a duplicate key. In SQLite this is handled as:
> UPDATE OR IGNORE table SET <etc>
> UPDATE OR REPLACE table SET <etc>
>
> And so on
>
> See https://www.sqlite.org/lang_update.html.
>
> Can Postgres do this?
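
Not natively, as far as I know. The closest approximations I can think of are a guarded UPDATE (roughly OR IGNORE) and a DELETE-then-UPDATE in one transaction (roughly OR REPLACE). A rough sketch only, assuming a hypothetical table t(k int primary key, v text) and an update that shifts the key by 10 for rows matching v = 'move me':

-- Roughly UPDATE OR IGNORE: skip rows whose new key already exists.
UPDATE t
SET    k = k + 10
WHERE  v = 'move me'
AND    NOT EXISTS (SELECT 1 FROM t AS other WHERE other.k = t.k + 10);

-- Roughly UPDATE OR REPLACE: first remove the rows that would be collided
-- with, then perform the update, all in one transaction.
BEGIN;
DELETE FROM t
WHERE  k IN (SELECT k + 10 FROM t WHERE v = 'move me');
UPDATE t SET k = k + 10 WHERE v = 'move me';
COMMIT;

The NOT EXISTS guard only sees the pre-statement snapshot, and the unique index is still checked row by row, so neither form exactly matches the SQLite semantics.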
That said, I would propose that this effectively violates referential integrity and shouldn't be a valid design pattern.
In my mind, primary keys are supposed to be static, stable, non-volatile...aka predictable. Contemplating such an activity feels like an alien invading my schema. I hope PG never supports that.
Postgres allows developers incredible freedom to do really crazy things. That doesn't mean they should.
Mike Sofen (USA)