From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Dimitri Fontaine <dimitri(at)2ndquadrant(dot)fr>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Steve Singer <ssinger(at)ca(dot)afilias(dot)info>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Event Triggers: adding information
Date: 2013-01-27 19:08:47
Message-ID: CA+TgmoZYxSEm2Pzpif-KGGBpFHAxODqSpJL+zT-ic_QgAH-8kA@mail.gmail.com
Lists: pgsql-hackers
On Sun, Jan 27, 2013 at 12:57 PM, Dimitri Fontaine
<dimitri(at)2ndquadrant(dot)fr> wrote:
> Robert Haas <robertmhaas(at)gmail(dot)com> writes:
>>> The current patch implementation is to fill in the object id, name and
>>> schema with NULL when we have something other than a single object as the
>>> target. I did that when I realized we have a precedent with statement
>>> triggers and that we would maybe share the implementation of the "record
>>> set variable" facility for PLs here.
>>
>> But I don't see how *that's* going to be any good for logical
>> replication either. If the raw query string isn't useful, getting the
>> OID in some cases but not all surely can't be any better.
>
> Well I would think that "we don't support dropping several objects at
> once yet in 9.3" is an acceptable answer.
On that point, I disagree.
>>> So we have two proposals here:
>>>
>>> - Have the cascading drop calls back to process utility with a new
>>> context value of PROCESS_UTILITY_CASCADE and its parse node, wherein
>>> you only stuff the ObjectAdress, and provide event triggers support
>>> for this new cascade command;
>>>
>>> - Implement a new event called "sql_drop" where you provide the same
>>> amount of information as in a ddl_command event (same lookups,
>>> etc.), but without any parse tree nor, I suppose, the command string
>>> that the user would have to type to drop just that object.
>>>
>>> You objected to the first on modularity-violation grounds, and to the
>>> second on development-cycle timing grounds. And now you're saying that
>>> because we don't have a practical solution, I'm not sure, apparently
>>> it's dead, but what is?
>>>
>>> Please help me decipher your process of thoughts and conclusions.
>>
>> OK, so I object to the first one on modularity grounds (as did Tom).
>> I don't object to the second one at all except that, AFAIK, nobody's
>> written the code yet. Maybe I'm misunderstanding something.
>
> I mean object as in "not in 9.3, too late", which you seem to me to be
> confirming here. So what do we want to provide in 9.3 exactly?
>
> Note that I have enough time allocated on finishing that patch that I
> surely can come up with an implementation of whatever we finally decide
> within this commit fest, if the implementation is straightforward
> enough. As I've been playing around these topics for two years now,
> plenty of things are a SMOP at this point: the problem is reaching an
> agreement about how to implement things.
I don't really know how to respond to this. The CommitFest time is
supposed to be when you stop working on your own stuff and work on
reviewing other people's stuff, and yet, both this year and last year,
you seem to have no compunctions whatever about continuing to code
right through the CommitFest, which to me misses the point. The
CommitFest, and especially the last one, is supposed to be the time to
commit what is committable now, not to go write new code. And it is
not as if I didn't make these same design points last year around this
time.
On the other hand, if it's really true that you can bang this out in a
couple of days and come up with something committable, will I commit
it? Yeah, probably. But I think that is likely to take several
months of work, not several days, and the longer we go on with this
patch the less time there is for me to review other things that might
be a whole lot more ready to go than code that isn't written yet.
>> Actually, I think we could probably use a similar set of
>> infrastructure *either* to implement sql_drop OR to provide the
>> information to ddl_command_end. What I'm thinking about is that we
>> might have something sort of like the after-trigger queue that we use for
>> ordinary triggers. Instead of calling a trigger as the drops happen,
>> we queue up a bunch of events that say "xyz got dropped". And then,
>> we can either fire something like ddl_command_end (and pass in the
>> whole list as an array or a table) or we can call something like
>> sql_drop (n times, passing details on one object per invocation). In
>> either approach, the triggers fire *after* the drops rather than in
>> the middle of them - it's just a question of whether you want one call
>> for all the drops or one call for each drop.
>
> So, do you want to see a patch implementing that to be able to provide
> information about DROP commands targeting more than one object, and the
> same information in other cases (object id, name, schema, operation name,
> object type)? That could be the next patch of the series, by Thursday.
I think this is approximately correct as a next logical step, but are
we really clear on what the design for that looks like? I thought you
were saying upthread that you wanted to make this information
available via some kind of pseudo-table for 9.4. In any event, adding
a new event type (sql_drop) is a separate patch, and adding
information to the existing types of event triggers is a separate
patch, so you should try to do one of those two, not both.
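To make the two firing styles described in the quoted design above concrete, here is a rough sketch of how they might look at the SQL level. Everything below is hypothetical and only illustrative: the sql_drop event, the pg_dropped_objects() set-returning function, and the column names it exposes are stand-ins for the proposals in this thread, not an existing API.

    -- Style 1 (hypothetical): "sql_drop" fires once per dropped object,
    -- after the drops have actually happened.
    CREATE FUNCTION note_one_drop() RETURNS event_trigger
    LANGUAGE plpgsql AS $$
    BEGIN
        -- The per-object details (oid, name, schema) would surface here;
        -- how exactly they are exposed is the open question in this thread.
        RAISE NOTICE 'sql_drop fired for one dropped object';
    END;
    $$;
    CREATE EVENT TRIGGER one_per_object ON sql_drop      -- hypothetical event
        EXECUTE PROCEDURE note_one_drop();

    -- Style 2 (hypothetical): ddl_command_end fires once per command and
    -- reads the whole queue of drops through a set-returning function.
    CREATE FUNCTION note_all_drops() RETURNS event_trigger
    LANGUAGE plpgsql AS $$
    DECLARE
        obj record;
    BEGIN
        FOR obj IN SELECT * FROM pg_dropped_objects()    -- hypothetical SRF
        LOOP
            RAISE NOTICE 'dropped % %.%',
                obj.object_type, obj.schema_name, obj.object_name;
        END LOOP;
    END;
    $$;
    CREATE EVENT TRIGGER one_per_command ON ddl_command_end
        EXECUTE PROCEDURE note_all_drops();

Either way the triggers run after the drops; the only user-visible difference is one invocation per object versus one invocation per command carrying the full list.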
>> Well, the point is that if you have a function that maps a parse tree
>> onto an object name, any API or ABI changes can be reflected in an
>> updated definition for that function. So suppose I have the command
>> "CREATE TABLE public.foo (a int)". And we have a call
>> pg_target_object_namespace(), which will return "public" given the
>> parse tree for the foregoing command. And we have a call
>> pg_target_object_name(), which will return "foo". We can whack around
>> the underlying parse tree representation all we want and still not
>> break anything - because any imaginable parse tree representation will
>> allow the object name and object namespace to be extracted. Were that
>> not possible it could scarcely be called a parse tree any longer.
>
> In my view, we have no DDL operation where we don't want to be able to
> get at the Object ID, name and schema of the operation. So while more
> complex cases might well require a full function-based API, I think a
> set of magic variables TG_* for basic information is going to make life
> easier for everybody.
>
> Then, more complex cases are left alone in 9.3 and we have a whole
> development cycle to address them, with some data type and a set of
> functions maybe.
Doing some of it one way and some of it the other way seems kind of
grotty, though.
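For what it's worth, here is a minimal sketch of the two shapes being weighed above, using the names proposed in this thread. None of these exist: pg_target_object_namespace(), pg_target_object_name(), and the TG_OBJECTID/TG_OBJECTNAME/TG_SCHEMANAME variables are placeholders that only serve to make the trade-off concrete.

    -- (a) Function-based API: helpers that extract details from the parse
    --     tree, e.g. for "CREATE TABLE public.foo (a int)".
    CREATE FUNCTION audit_ddl_fn() RETURNS event_trigger
    LANGUAGE plpgsql AS $$
    BEGIN
        RAISE NOTICE 'target: %.%',
            pg_target_object_namespace(),   -- would return 'public'
            pg_target_object_name();        -- would return 'foo'
    END;
    $$;

    -- (b) Magic variables: basic details pre-filled for every command, in
    --     the style of the TG_* variables that regular triggers already have.
    CREATE FUNCTION audit_ddl_var() RETURNS event_trigger
    LANGUAGE plpgsql AS $$
    BEGIN
        RAISE NOTICE 'target: %.% (oid %)',
            tg_schemaname, tg_objectname, tg_objectid;   -- hypothetical
    END;
    $$;

The function-based style insulates users from parse-tree changes at the cost of a larger API surface; the magic-variable style covers the common id/name/schema case with almost no API. Mixing the two (variables for the basics now, functions for complex cases later) is what draws the "grotty" objection above.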
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company