Re: Using Expanded Objects other than Arrays from plpgsql

From: Michel Pelletier <pelletier(dot)michel(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: Using Expanded Objects other than Arrays from plpgsql
Date: 2024-10-23 04:15:30
Message-ID: CACxu=v+t5xDh0K=tOVmFrumvAH60izWhB16a9k7f2A8jPxQEaw@mail.gmail.com
Lists: pgsql-general pgsql-hackers

On Tue, Oct 22, 2024 at 12:33 PM Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:

> Michel Pelletier <pelletier(dot)michel(at)gmail(dot)com> writes:

> But now we also have
> expanded records, and with your use-case as an example of an extension
> trying to do similar things, I feel like we have enough examples to
> move forward.
>

Great!

> As far as the hack we were discussing upthread goes, I realized that
> it should test for typlen == -1 not just !typbyval, since the
> VARATT_IS_EXTERNAL_EXPANDED_RW test requires that there be a length
> word. With that fix and some comment updates, it looks like the
> attached. I'm inclined to just go ahead and push that. It won't move
> the needle hugely far, but it will help, and it seems simple and safe.
>

I made those changes and my code runs a bit faster; it looks like it takes
a couple of the top-level expansions out. I'll have more data in the
morning.
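
For anyone following along, my (possibly imprecise) reading of the check
being patched is roughly the following; this is a paraphrase with made-up
variable names, not the actual diff:

/*
 * Only pass-by-reference varlena types (typlen == -1) can carry an
 * expanded-object header, so !typbyval alone isn't enough before
 * testing for a R/W expanded pointer.
 */
if (!var->datatype->typbyval &&
    var->datatype->typlen == -1 &&
    !isnull &&
    VARATT_IS_EXTERNAL_EXPANDED_RW(DatumGetPointer(value)))
{
    /* safe to take ownership of the expanded object */
}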

> To make more progress, there are basically two areas that we need
> to look at:
>
> 1. exec_assign_value() has hard-wired knowledge that it's a good idea
> to expand an array that's being assigned to a plpgsql variable, and
> likewise hard-wired knowledge that calling expand_array() is how to
> do that. The bit in plpgsql_exec_function() that we're patching
> in the attached is the same thing, but for the initial assignment of
> a function input parameter to a plpgsql variable. At the time this
> was written I was quite unsure that forcible expansion would be a net
> win, but the code is nine years old now and there's been few or no
> performance complaints. So maybe it's okay to decide that "always
> expand expandable types during assignment" is a suitable policy across
> the board, and we don't need to figure out a smarter rule. It sounds
> like that'd probably be a win for your application, which gives me
> more faith in the idea than I would've had before.

Definitely a win, as the flattened format of my objects doesn't have any
run-time use, so there is no chance of a net loss for me. I guess I'm using
this feature differently from how arrays use it: arrays have a usable
flattened format, so there is a trade-off to weigh between expanding or
not. In my case, only the expanded version is useful, and serializing the
flat version is expensive. Formalizing something like expand_array would
work well for me, as my expand_matrix function has the identical function
signature and serves the exact same purpose.
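
To be concrete, the expansion function has the same shape as expand_array;
my version is declared roughly like this (a paraphrase, not the exact code):

/*
 * Take a flat matrix varlena, rebuild the GrB_Matrix handle from it via
 * the GraphBLAS serialize/deserialize API inside a new memory context
 * under parentcontext, and return a read-write expanded-object datum
 * that boxes the handle.
 */
Datum expand_matrix(Datum flatdatum, MemoryContext parentcontext);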

> So now
> I'm thinking that we should steal ideas from the "prosupport" API
> (see src/include/nodes/supportnodes.h) and invent a concept of a
> "type support" function that can handle an extensible set of
> different requests. The first one would be to pass back the
> address of an expansion function comparable to expand_array(),
> if the type supports being converted to an expanded object.
>

I'll look into the support code; I haven't seen that before.
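
Purely as a hypothetical sketch of what I imagine such a request might look
like (this is not an existing Postgres structure, just me thinking out
loud, modeled loosely on src/include/nodes/supportnodes.h):

typedef struct SupportRequestExpand
{
    NodeTag     type;
    Oid         typid;          /* the type being asked about */

    /* output: expansion function comparable to expand_array(), or NULL */
    Datum     (*expand_fn) (Datum flatdatum, MemoryContext parentcontext);
} SupportRequestExpand;

I'll defer to whatever shape the prosupport precedent actually suggests.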

> 2. Most of the performance gold is hidden in deciding when we
> can optimize operations on expanded objects that look like
>
> plpgsql_var := f(plpgsql_var, other-parameters);
>
> by passing a R/W rather than R/O expanded pointer to f() and letting
> it munge the expanded object in-place. If we fail to do that then
> f() has to construct a new expanded object for its result. It's
> probably still better off getting a R/O pointer than a flat object,
> but nonetheless we fail to avoid a lot of data copying.
>
> The difficulty here is that we do not want to change the normal naive
> semantics of such an assignment, in particular "if f() throws an error
> then the value of plpgsql_var has not been modified". This means that
> we can only optimize when the R/W parameter is to be passed to the
> top-level function of the expression (else, some function called later
> could throw an error and ruin things). Furthermore, we need a
> guarantee from f() that it will not throw an error after modifying
> the value.

> As things stand, plpgsql has hard-wired knowledge that
> array_subscript_handler(), array_append(), and array_prepend()
> are safe in that way, but it knows nothing about anything else.
> So one route to making things better seems fairly clear: invent a new
> prosupport request that asks whether the function is prepared to make
> such a guarantee. I wonder though if you can guarantee any such thing
> for your functions, when you are relying on a library that's probably
> not designed with such a restriction in mind. If this won't work then
> we need a new concept.
>

This will work for the GraphBLAS API. The expanded object in my case is
really just a small box struct around a GraphBLAS "handle", which is an
opaque pointer to data that I cannot mutate; only the library can change
the object behind the handle. The API makes strong guarantees that it will
either do the operation and return a success code, or not do the operation
and return an error code. It's not possible (normally) to get a corrupt or
incomplete object back.
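
Every call in my code follows the same pattern; a simplified illustration
with made-up names (my real wrappers do a bit more):

/* either the whole operation succeeds, or the matrix is left untouched */
static void
os_set_element(GrB_Matrix matrix, int64_t value, GrB_Index i, GrB_Index j)
{
    GrB_Info info = GrB_Matrix_setElement_INT64(matrix, value, i, j);

    if (info != GrB_SUCCESS)
        ereport(ERROR,
                (errmsg("GraphBLAS operation failed with code %d",
                        (int) info)));
}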

>
> One idea I was toying with is that it doesn't matter if f()
> throws an error so long as the plpgsql function is not executing
> within an exception block: if the error propagates out of the plpgsql
> function then we no longer care about the value of the variable.
> That would very substantially weaken the requirements on how f()
> is implemented. I'm not sure that we could make this work across
> multiple levels of plpgsql functions (that is, if the value of the
> variable ultimately resides in some outer function) but right now
> that's not an issue since no plpgsql function as such would ever
> promise to be safe, thus it would never receive a R/W pointer to
> some outer function's variable.
>

The water here is pretty deep for me, but I'm pretty sure I get what you're
saying. I'll need to do some more studying of the plpgsql code, which I've
been spending the last couple of days familiarizing myself with.
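
If I follow, the problem case (with a hypothetical function f) is roughly:

BEGIN
    -- if graph were passed to f() as R/W and mutated in place before
    -- f() threw, the handler below would see a half-modified graph
    graph = f(graph);
EXCEPTION WHEN others THEN
    -- naive assignment semantics: graph must still hold its old value here
    RETURN graph;
END;

and outside of any exception block that concern goes away, since an error
just propagates out and nobody looks at graph again.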

> > same with:
> > bfs_vector = vxm(bfs_vector, graph, 'any_secondi_int32',
> > w=>bfs_vector, accum=>'min_int32');
> > This matmul mutates bfs_vector, I shouldn't need to reassign it back but
> > at
> > the moment it seems necessary otherwise the mutations are lost but this
> > costs a full flatten/expand cycle.
>
> I'm not hugely excited about that. We could imagine extending
> this concept to INOUT parameters of procedures, but it doesn't
> seem like that buys anything except a small notational savings.
> Maybe it would work to allow multiple parameters of a procedure
> to be passed as R/W, whereas we're restricted to one for the
> function-notation method. So possibly there's a gain there but
> I'm not sure how big.
>
> BTW, if I understand your example correctly then bfs_vector is
> being passed to vxm() twice.

Yes, this is not unusual in this form of linear algebra, as multiple
operations often accumulate into the same object to prevent a bunch of
copying during each iteration of a given algorithm. There is also a "mask"
parameter where another (or the same) object can be provided to either mask
in or mask out (complement mask) values during the accumulation phase. This
is very useful for many algorithms, and a good example is the Burkhardt
method of Triangle Counting
(https://journals.sagepub.com/doi/10.1177/1473871616666393), which in
GraphBLAS boils down to:

GrB_mxm (C, A, NULL, semiring, A, A, GrB_DESC_S) ;
GrB_reduce (&ntri, NULL, monoid, C, NULL) ;
ntri /= 6 ;

In this case A is three of the parameters to mxm: the left operand, the
right operand, and a structural mask. This can be summed up as "A squared,
masked by A", which when reduced returns the number of triangles in the
graph (times 6).
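
Spelled out with the GrB_mxm parameter roles (the same call as above, just
annotated):

GrB_mxm (C,            /* output matrix */
         A,            /* mask: only compute where A has entries */
         NULL,         /* no accumulator */
         semiring,     /* the counting semiring */
         A, A,         /* left and right operands: A squared */
         GrB_DESC_S) ; /* treat the mask structurally */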

> This brings up an interesting
> point: if we pass the first instance as R/W, and vxm() manipulates it,
> then the changes would also be visible in its other parameter "w".
> This is certainly not per normal semantics. A "safe" function would
> have to either not have any possibility that two parameters could
> refer to the same object, or be coded in a way that made it impervious
> to this issue --- in your example, it couldn't look at "w" anymore
> once it'd started modifying the first parameter. Is that an okay
> requirement, and if not what shall we do about it?
>

I *think*, if I understand you correctly, that this isn't an issue for the
GraphBLAS. My expanded objects are just boxes around an opaque handle; I
don't actually mutate anything inside the box, and I can't see past the
opaque pointer. SuiteSparse may be storing the matrix in one of many
different formats, or on a GPU, or who knows; all I have is a handle to "A"
which I pass to GraphBLAS methods, and that is the only way I can interact
with the object. Here's the definition of that vxm function:

https://github.com/OneSparse/OneSparse/blob/main/src/matrix.c#L907

It's pretty straightforward: get the arguments and pass them to the
GraphBLAS API. There is no mutable structure inside the expanded "box",
just the handle.
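
Paraphrasing the shape of it (simplified, names approximate; the real
struct carries a bit more bookkeeping):

typedef struct os_Matrix         /* the expanded "box" */
{
    ExpandedObjectHeader hdr;    /* Postgres expanded-object plumbing */
    GrB_Matrix           matrix; /* the opaque GraphBLAS handle */
} os_Matrix;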

I'm using the expanded object API to solve my two key problems: flattening
an object for disk storage, and expanding that object (through the
GraphBLAS serialize/deserialize API) back into an object handle, which is
secretly just a pointer to the internal details of the object, but I can't
see or change that; only SuiteSparse can.

(BTW, sorry about the bad parameter names; "w" is the name from the API
spec, which I've been sticking to. It is the optional output object to use;
if one is not passed, I create a new one. This is similar to the numpy
"out" parameter semantics.)

I added some debug instrumentation that might show a bit more of what's
going on for me; consider this function:

CREATE OR REPLACE FUNCTION test(graph matrix)
RETURNS matrix LANGUAGE plpgsql AS
$$
DECLARE
    n bigint = nrows(graph);
    m bigint = ncols(graph);
BEGIN
    RETURN graph;
END;
$$;

The graph passes straight through, but first I call two methods to get the
number of rows and columns. When I run it on a graph:

postgres=# select pg_typeof(test(graph)) from test_graphs ;
DEBUG: matrix_nvals
DEBUG: DatumGetMatrix
DEBUG: expand_matrix
DEBUG: new_matrix
DEBUG: context_callback_matrix_free
DEBUG: matrix_ncols
DEBUG: DatumGetMatrix
DEBUG: expand_matrix
DEBUG: new_matrix
DEBUG: context_callback_matrix_free
pg_typeof
-----------
matrix
(1 row)

The matrix gets expanded twice, presumably because the object comes in flat
and both nrows() and ncols() expand it, which ends up being two separate
handles and thus two separate objects to the GraphBLAS.

Here's another example:

CREATE OR REPLACE FUNCTION test2(graph matrix)
RETURNS bigint LANGUAGE plpgsql AS
$$
BEGIN
    PERFORM set_element(graph, 1, 1, 1);
    RETURN nvals(graph);
END;
$$;
CREATE FUNCTION
postgres=# select test2(matrix('int32'));
DEBUG: new_matrix
DEBUG: matrix_get_flat_size
DEBUG: flatten_matrix
DEBUG: scalar_int32
DEBUG: new_scalar
DEBUG: matrix_set_element
DEBUG: DatumGetMatrix
DEBUG: expand_matrix
DEBUG: new_matrix
DEBUG: DatumGetScalar
DEBUG: matrix_get_flat_size
DEBUG: matrix_get_flat_size
DEBUG: flatten_matrix
DEBUG: context_callback_matrix_free
DEBUG: context_callback_scalar_free
DEBUG: matrix_nvals
DEBUG: DatumGetMatrix
DEBUG: expand_matrix
DEBUG: new_matrix
DEBUG: context_callback_matrix_free
DEBUG: context_callback_matrix_free
test2
-------
0
(1 row)

I would expect that to return 1. If I do "graph = set_element(graph, 1, 1,
1)" it works.

I hope that gives a bit more information about my use cases. In general I'm
very happy with the API; it's very algebraic, and I have a lot of
interesting plans for supporting more operators and subscripting syntax,
but this issue is now my top priority to see if we can resolve it. I'm sure
I missed something in your detailed plan, so I'll be going over it some
more this week. Please let me know if you have any other questions about my
use case or concerns about my expectations.

Thank you!

-Michel
