From: Peter Eisentraut <peter(dot)eisentraut(at)enterprisedb(dot)com>
To: Japin Li <japinli(at)hotmail(dot)com>, Bharath Rupireddy <bharath(dot)rupireddyforpostgres(at)gmail(dot)com>
Cc: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, "pgsql-hackers(at)lists(dot)postgresql(dot)org" <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Support ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax
Date: 2021-04-02 19:59:13
Message-ID: 864979ad-880d-0f3a-55a1-21de9a08b9a7@enterprisedb.com
Lists: pgsql-hackers
On 23.03.21 16:08, Japin Li wrote:
> I check for duplicates in newpublist in merge_publications(). The code is
> copied from publicationListToArray().
>
> I do not report all duplicates because that would make the code more complex.
> For example:
>
> ALTER SUBSCRIPTION mysub ADD PUBLICATION mypub2, mypub2, mypub2;
>
> If we recorded the duplicate publication names in a list A, then whenever we
> find a duplicate in newpublist we would also have to check whether that
> publication is already in list A, so that the error message makes sense
> (i.e., does not list the same publication name more than once).
The code you have in merge_publications() to report all existing
publications is pretty messy and is not properly internationalized. I
think what you are trying to do there is excessive. Compare this
similar case:
create table t1 (a int, b int);
alter table t1 add column a int, add column b int;
ERROR: 42701: column "a" of relation "t1" already exists
I think you can make both this and the duplicate checking much simpler
if you just report the first conflict.
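For illustration, here is a rough sketch (not code from your patch) of what
"report only the first conflict" could look like; the function name, the
list names oldpublist/newpublist, and the error wording are just assumptions:

#include "postgres.h"
#include "nodes/pg_list.h"
#include "nodes/value.h"

/* Sketch only: raise an error on the first conflict found and stop. */
static void
check_publication_lists(List *oldpublist, List *newpublist)
{
    ListCell   *lc;

    foreach(lc, newpublist)
    {
        char       *name = strVal(lfirst(lc));
        ListCell   *lc2;

        /* duplicate within the ADD/DROP list itself? */
        foreach(lc2, newpublist)
        {
            if (lc2 == lc)
                break;
            if (strcmp(name, strVal(lfirst(lc2))) == 0)
                ereport(ERROR,
                        (errcode(ERRCODE_DUPLICATE_OBJECT),
                         errmsg("publication \"%s\" is specified more than once",
                                name)));
        }

        /* already part of the subscription? */
        foreach(lc2, oldpublist)
        {
            if (strcmp(name, strVal(lfirst(lc2))) == 0)
                ereport(ERROR,
                        (errcode(ERRCODE_DUPLICATE_OBJECT),
                         errmsg("publication \"%s\" is already in the subscription",
                                name)));
        }
    }
}

That keeps the error reporting to a single errmsg() call, which also avoids
the internationalization problem of assembling a list of names by hand.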
I think this patch is about ready to commit, but please provide a final
version in good time.
(Also, please combine your patches into a single patch.)