| From: | Michał Kłeczek <michal(at)kleczek(dot)org> |
|---|---|
| To: | Marcelo Fernandes <marcefern7(at)gmail(dot)com> |
| Cc: | Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>, pgsql-general(at)lists(dot)postgresql(dot)org |
| Subject: | Re: Best Approach for Swapping a Table with its Copy |
| Date: | 2025-02-14 06:01:55 |
| Message-ID: | 44BB6A74-0AB1-462C-B13A-2CF106DA70D6@kleczek.org |
| Lists: | pgsql-general |
> On 12 Feb 2025, at 22:27, Marcelo Fernandes <marcefern7(at)gmail(dot)com> wrote:
>
> On Thu, Feb 13, 2025 at 10:02 AM Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com> wrote:
>
> Also pulling in your question in the other reply:
>
>> Why can't you just add the exclusion constraint to the original table?
>
> With unique constraints, one can build the unique index concurrently and then
> add the constraint using that index.
>
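For context, a minimal sketch of that unique-constraint path (table and column names are made up for illustration):

```sql
-- Build the unique index without blocking writes...
CREATE UNIQUE INDEX CONCURRENTLY orders_ref_key ON orders (ref);

-- ...then attach it as a constraint; this takes a brief ACCESS EXCLUSIVE lock
-- but does not rescan the table, because the index already exists.
ALTER TABLE orders
    ADD CONSTRAINT orders_ref_key UNIQUE USING INDEX orders_ref_key;
```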
> With check constraints, one can add the constraint as NOT VALID and then
> validate it, which requires only a SHARE UPDATE EXCLUSIVE lock.
>
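And the corresponding check-constraint sketch (again with illustrative names):

```sql
-- Step 1: add the constraint without validating existing rows (short lock).
ALTER TABLE orders
    ADD CONSTRAINT orders_amount_positive CHECK (amount > 0) NOT VALID;

-- Step 2: validate existing rows; this only takes SHARE UPDATE EXCLUSIVE,
-- so normal reads and writes keep running.
ALTER TABLE orders VALIDATE CONSTRAINT orders_amount_positive;
```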
> But with exclusion constraints, neither of those techniques is available. In
> that sense, there is no way to create this type of constraint on a large table
> without copying the original table, adding the constraint, and performing a
> table swap.
>
> This is done to avoid having to hold an exclusive lock for a long amount of
> time, thus creating application outages.
>
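For comparison, the only documented way to add an exclusion constraint is a plain ALTER TABLE, which builds the supporting index while holding an ACCESS EXCLUSIVE lock for the whole duration (names are illustrative; NOT VALID is not accepted for EXCLUDE constraints):

```sql
-- Blocks reads and writes on the table while the GiST index is built.
-- The = operator on room_id needs the btree_gist extension.
ALTER TABLE bookings
    ADD CONSTRAINT bookings_no_overlap
    EXCLUDE USING gist (room_id WITH =, during WITH &&);
```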
Just a wild idea (not sure if anyone suggested it in this thread and not sure if it is doable):
Create index concurrently and then fiddle with the catalog tables to define the constraint using this index?
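A rough, purely hypothetical sketch of what that would involve: build the GiST index with CREATE INDEX CONCURRENTLY, then hand-craft the pg_constraint row (contype = 'x', conindid pointing at the index, conexclop listing the exclusion operators) that ALTER TABLE ... ADD EXCLUDE would normally create. Creating the constraint the normal way on a small scratch table and inspecting the catalog shows which fields are involved:

```sql
-- Inspect how a natively created exclusion constraint is wired up in the catalog.
SELECT c.conname,
       c.contype,              -- 'x' marks an exclusion constraint
       c.conindid::regclass,   -- the supporting index
       c.conexclop             -- OIDs of the exclusion operators, one per column
FROM   pg_constraint c
WHERE  c.contype = 'x';
```

Wiring such a row in by hand would also need matching pg_depend entries, and none of it is supported, so this is very much at-your-own-risk territory.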
—
Michal