From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Amit Khandekar <amit(dot)khandekar(at)enterprisedb(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: delta relations in AFTER triggers
Date: 2014-08-29 21:15:01
Message-ID: 1409346901.59767.YahooMailNeo@web122306.mail.ne1.yahoo.com
Lists: pgsql-hackers
Heikki Linnakangas <hlinnakangas(at)vmware(dot)com> wrote:
> On 08/28/2014 12:03 AM, Kevin Grittner wrote:
>> Heikki Linnakangas <hlinnakangas(at)vmware(dot)com> wrote:
>>> I suggest adding a new hook to the ParseState struct, (p_rangevar_hook
>>> ?). The planner calls it whenever it sees a reference to a table, and
>>> the hook function returns back some sort of placeholder reference to the
>>> tuplestore. With variables, the hook returns a Param node, and at
>>> execution time, the executor calls the paramFetch hook to fetch the
>>> value of the param. For relations/tuplestores, I guess we'll need to
>>> invent something like a Param node, but for holding information about
>>> the relation. Like your TsrData struct, but without the pointer to the
>>> tuplestore. At execution time, in the SPI_execute call, you pass the
>>> pointer to the tuplestore in the ParamListInfo struct, like you pass
>>> parameter values.
>>>
>>> Does this make sense?
>>
>> I see your point, but SPI first has to be made aware of the
>> tuplestores and their corresponding names and TupleDesc structures.
>> Does it make sense to keep the SPI_register_tuplestore() and
>> SPI_unregister_tuplestore() functions for the client side of the
>> API, and pass things along to the parse analysis through execution
>> phases using the techniques you describe?
>
> Sorry, I didn't understand that. What do you mean by "first", and the
> "client side of the API"? I don't see any need for the
> SPI_register_tuplestore() and SPI_unregister_tuplestore() functions
> if you use the hooks.
If we were to go with the hooks as you propose, we would still need
to take the information from TriggerData and put it somewhere else
for the hook to reference. The hooks are generalized for plpgsql,
not just for triggers, and it doesn't seem appropriate for them to
be fishing around in the TriggerData structure. And what if we add
other sources for tuplestores? The lookup during parse analysis
each time an apparent relation name is encountered must be simple
and fast.
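
Roughly what I have in mind on the parse analysis side is something
like this (a sketch only; get_visible_tuplestore() and
addRangeTableEntryForTuplestore() are placeholder names for
illustration, not what the patch actually uses):

    /*
     * Sketch only: when parse analysis sees an apparent relation name,
     * check the process-local registry first.  Both helper functions
     * here are hypothetical placeholders.
     */
    Tsr         tsr = get_visible_tuplestore(relation->relname);

    if (tsr != NULL)
    {
        /* Build the range table entry from the registered TupleDesc. */
        rte = addRangeTableEntryForTuplestore(pstate, tsr, relation->alias);
        return rte;
    }
    /* ... otherwise fall through to the normal catalog lookup ... */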
I want named tuplestores to be easy to use from *all* PLs (for
trigger usage) as well as useful for other purposes people may want
to develop. I had to change the hashkey for plpgsql's plan
caching, but that needs to be done regardless of the API (to
prevent problems in the obscure case that someone attaches the same
trigger function to the same table for the same events more than
once with different trigger names and different transition table
names). If you ignore that, the *entire* change to use this in
plpgsql is to add these lines to plpgsql_exec_trigger():
    /*
     * Capture the NEW and OLD transition TABLE tuplestores (if specified for
     * this trigger).
     */
    if (trigdata->tg_newtable)
    {
        Tsr         tsr = palloc(sizeof(TsrData));

        tsr->name = trigdata->tg_trigger->tgnewtable;
        tsr->tstate = trigdata->tg_newtable;
        tsr->tupdesc = trigdata->tg_relation->rd_att;
        tsr->relid = trigdata->tg_relation->rd_id;
        SPI_register_tuplestore(tsr);
    }
    if (trigdata->tg_oldtable)
    {
        Tsr         tsr = palloc(sizeof(TsrData));

        tsr->name = trigdata->tg_trigger->tgoldtable;
        tsr->tstate = trigdata->tg_oldtable;
        tsr->tupdesc = trigdata->tg_relation->rd_att;
        tsr->relid = trigdata->tg_relation->rd_id;
        SPI_register_tuplestore(tsr);
    }
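
For reference, the struct being filled in above is something like
this (a sketch implied by the assignments above; see the patch for
the exact definition):

    /* Sketch of the registration struct implied by the code above. */
    typedef struct TsrData
    {
        char           *name;       /* name queries will use to reference it */
        Tuplestorestate *tstate;    /* the tuplestore itself */
        TupleDesc       tupdesc;    /* descriptor for the stored tuples */
        Oid             relid;      /* OID of the associated relation */
    } TsrData;

    typedef TsrData *Tsr;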
With the new SPI functions, the code to implement this in each of
the other PLs should be about the same (possibly identical), and
other areas using SPI need only similar code to make tuplestores
visible to the planner and usable in the executor if someone has
another use for
this. You just do the above once you have run SPI_connect() and
before preparing or executing any query that references the named
tuplestore. It remains available on that SPI connection until
SPI_finish() is called or you explicitly unregister it (by name).
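
Concretely, non-trigger usage would look something like this (a
sketch only; the "mydata" name, my_tstate, and my_tupdesc are
placeholders that are assumed to exist already, and passing
InvalidOid as the relid for a tuplestore with no backing relation is
an assumption for the example):

    /*
     * Sketch of non-trigger usage.  my_tstate and my_tupdesc are assumed
     * to exist already; InvalidOid as relid is an assumption for a
     * tuplestore with no backing relation.
     */
    Tsr         tsr;

    SPI_connect();

    tsr = palloc(sizeof(TsrData));
    tsr->name = "mydata";
    tsr->tstate = my_tstate;
    tsr->tupdesc = my_tupdesc;
    tsr->relid = InvalidOid;
    SPI_register_tuplestore(tsr);

    /* "mydata" is now usable as a relation name on this connection. */
    SPI_execute("SELECT count(*) FROM mydata", true, 0);

    /* Either unregister explicitly by name ... */
    SPI_unregister_tuplestore("mydata");

    /* ... or simply let SPI_finish() end its visibility. */
    SPI_finish();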
If we use the hooks, I think it will be several times as much code,
more invasive, and probably more fragile. More importantly, these
hooks are not used by the other PLs included with core, and are not
used in any of the other core code, anywhere. They all use SPI, so
they can do the above very easily, but adding infrastructure for
them to use the hooks would be a lot more work, and I'm not seeing
a corresponding benefit.
I think there is some room to change the API as used by the parser,
planner, and executor so that no changes are needed there if we add
some other registration mechanism besides SPI, but I think having a
registry for tuplestores that sort of takes the place of the
catalogs and related caches (but is far simpler, being process
local) is a better model than what you propose.
In summary, thinking of the definition of a named tuplestore as a
variable is the wrong parallel; it is more like a lightweight
relation, and the comparison should be to that, not to function
parameters or local variables.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company