From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
Cc: Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Add CREATE support to event triggers
Date: 2014-01-06 17:51:26
Message-ID: CA+Tgmoaev4apWpLDZE7etvHfQHrFv8Yjfc9J8pEEoPjNkxcrFQ@mail.gmail.com
Lists: pgsql-hackers
On Fri, Jan 3, 2014 at 9:05 AM, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com> wrote:
>> The other thing that bothers me here is that, while a normalized
>> command string sounds great in theory, as soon as you want to allow
>> (for example) mapping schema A on node 1 to schema B on node 2, the
>> wheels come off: you'll have to deparse that normalized command string
>> so you can change out the schema name and then reassemble it back into
>> a command string again. So we're going to parse the user input, then
>> deparse it, hand over the results to the application code, which will
>> then parse it, modify that, and deparse it again.
>
> I have considered several ideas on this front, but most of them turn out
> to be useless or too cumbersome to use. What seems most adequate is to
> build a command string containing certain patterns, and an array of
> replacement values for such patterns; each pattern corresponds to one
> element that somebody might want modified in the command. As a trivial
> example, a command such as
>
> CREATE TABLE foo (bar INTEGER);
>
> would return a string like
> CREATE TABLE ${table_schema}.${table_name} (bar INTEGER);
>
> and the replacement array would be
> {table_schema => "public", table_name => "foo"}
>
> If we additionally provide a function to expand the replacements in the
> string, we would have the base functionality of a normalized command
> string. If somebody wants to move the table to some other schema, they
> can simply modify the array to suit their taste, and again expand using
> the provided function; this doesn't require parsing SQL. It's likely
> that there are lots of fine details that need exploring before this is a
> fully workable idea -- I have just started work on it, so please bear
> with me.
>
> I think this is basically what you call "a JSON blob".
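
For illustration, a minimal sketch of how such an expansion step might look in
plpgsql. The function name expand_ddl_template() and the use of json for the
replacement map are assumptions here, not part of any proposed or committed API:

    -- Hypothetical helper: substitute each ${key} placeholder in a command
    -- template with the corresponding value from a json replacement map.
    CREATE FUNCTION expand_ddl_template(tmpl text, repl json) RETURNS text
    LANGUAGE plpgsql AS $$
    DECLARE
        k text;
        v text;
        result text := tmpl;
    BEGIN
        FOR k, v IN SELECT key, value FROM json_each_text(repl) LOOP
            -- quote_ident() keeps substituted identifiers safe
            result := replace(result, '${' || k || '}', quote_ident(v));
        END LOOP;
        RETURN result;
    END;
    $$;

    -- Expanding the original command:
    SELECT expand_ddl_template(
        'CREATE TABLE ${table_schema}.${table_name} (bar INTEGER);',
        '{"table_schema": "public", "table_name": "foo"}');

    -- Remapping the table into another schema only touches the json map,
    -- with no SQL parsing on the application side:
    SELECT expand_ddl_template(
        'CREATE TABLE ${table_schema}.${table_name} (bar INTEGER);',
        '{"table_schema": "audit", "table_name": "foo"}');
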
I think this direction has some potential. I'm not sure it's right in
detail. The exact scheme you propose above won't work if you want to
leave out the schema name altogether, and more generally it's not
going to help very much with anything other than substituting in
identifiers. What if you want to add a column called satellite_id to
every table that gets created, for example? What if you want to make
the tables UNLOGGED? I don't see how that kind of thing is going to
work at all cleanly.

What I can imagine working is that you get a JSON blob for a CREATE
TABLE statement, plus a function make_a_create_table_statement(json)
that will turn it back into SQL. You can pass in a modified blob
instead (adding, removing, or changing the schema_name or any other
element) and it will do its thing.
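
For instance, the blob for the example above might look roughly like this,
with make_a_create_table_statement() regenerating the SQL from it. The field
names and structure here are only a guess at what such a blob could contain:

    -- Hypothetical: the blob shape and result are illustrative only.
    SELECT make_a_create_table_statement('{
        "schema_name": "public",
        "table_name":  "foo",
        "persistence": "permanent",
        "columns": [ {"name": "bar", "type": "integer"} ]
    }');
    -- possible result: CREATE TABLE public.foo (bar integer)

A consumer could then append a satellite_id entry to "columns", or flip
"persistence" to "unlogged", before regenerating the statement, which is the
kind of change the ${...} substitution scheme can't express.
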
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company