From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Kouhei Kaigai <kaigai(at)ak(dot)jp(dot)nec(dot)com>
Cc: Kohei KaiGai <kaigai(at)kaigai(dot)gr(dot)jp>, Shigeru Hanada <shigeru(dot)hanada(at)gmail(dot)com>, Jim Mlodgenski <jimmy76(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, PgHacker <pgsql-hackers(at)postgresql(dot)org>, Peter Eisentraut <peter_e(at)gmx(dot)net>
Subject: Re: Custom Scan APIs (Re: Custom Plan node)
Date: 2014-02-26 08:02:54
Message-ID: 20140226080253.GF2921@tamriel.snowman.net
Lists: pgsql-hackers
* Kouhei Kaigai (kaigai(at)ak(dot)jp(dot)nec(dot)com) wrote:
> > Just to come back to this- the other two "contrib module" patches, at least
> > as I read over their initial submission, were *also* patching portions of
> > backend code which it was apparently discovered that they needed. That's
> > a good bit of my complaint regarding this approach.
> >
> Sorry, are you still negative about the backend portions patched by
> the part-2 and part-3 patches?
Pretty sure that I sent that prior to your last email, or at least
before I had gotten to the end of it.
> > If you're looking to just use GPU acceleration for improving individual
> > queries, I would think that Robert's work around backend workers would be
> > a more appropriate way to go, with the ability to move a working set of
> > data from shared buffers and on-disk representation of a relation over to
> > the GPU's memory, perform the operation, and then copy the results back.
> >
> The approach is similar to Robert's work, except that it uses GPUs
> instead of multicore CPUs. So I have been reviewing his work to see
> how to apply those facilities in my extension as well.
Good, I'd be very curious to hear how that might solve the issue for
you, instead of using the CustomScan approach.
> > "regular" PG tables, just to point out one issue, can be locked on a
> > row-by-row basis, and we know exactly where in shared buffers to go hunt
> > down the rows. How is that going to work here, if this is both a "regular"
> > table and stored off in a GPU's memory across subsequent queries or even
> > transactions?
> >
> It will have to be handled on a case-by-case basis, I think. If a
> row-level lock is required during the table scan, the custom-scan node
> should return a tuple located in shared buffers instead of the cached
> tuple. Of course, the custom-scan node also has the option of
> evaluating the qualifiers on the GPU against the cached data and then
> returning the tuples identified by the ctids of the cached tuples.
> Anyway, it is not a significant problem.
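The pattern being proposed above can be sketched in miniature: qualifiers are evaluated against a device-side cache, but the tuples actually returned are fetched from the authoritative shared-buffer copy by ctid, so locking and visibility rules apply to the live row. All names here (shared_buffers, gpu_cache, custom_scan) are illustrative toy structures, not real PostgreSQL APIs.

```python
# Authoritative copy, keyed by ctid (block, offset) -- stands in for
# the live rows in shared buffers.
shared_buffers = {
    (0, 1): {"id": 1, "val": 10},
    (0, 2): {"id": 2, "val": 25},
    (0, 3): {"id": 3, "val": 40},
}

# Possibly stale snapshot "cached on the GPU"; each entry remembers
# the ctid of the row it was copied from.
gpu_cache = [
    {"ctid": (0, 1), "val": 10},
    {"ctid": (0, 2), "val": 25},
    {"ctid": (0, 3), "val": 40},
]

def custom_scan(qual):
    """Evaluate `qual` on the cached copy (the offloaded step), then
    fetch each matching row from shared buffers by ctid so that the
    tuple handed back upward is the live one."""
    for cached in gpu_cache:
        if qual(cached):
            yield shared_buffers[cached["ctid"]]

rows = list(custom_scan(lambda r: r["val"] > 15))
# rows -> [{"id": 2, "val": 25}, {"id": 3, "val": 40}]
```

The key property is that the cache is used only to decide *which* rows qualify; the returned data always comes from the authoritative store.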
I think you're being a bit too hand-wavy here, but if we're talking
about pre-scanning the data using PG before sending it to the GPU and
then only performing a single statement on the GPU, we should be able to
deal with it. I'm worried about your ideas to try to cache things on
the GPU though, if you're not prepared to deal with locks happening in
shared memory on the rows you've got cached out on the GPU, or hint
bits, or the visibility map being updated, etc...
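The staleness concern above can be made concrete with a toy guard: stamp the cache with a version taken from the source page (in PostgreSQL terms, something like the page LSN) and recheck it before trusting cached results. The Page and Cache classes here are hypothetical stand-ins, not PostgreSQL code.

```python
class Page:
    """Stand-in for a shared-buffer page; any write bumps the version,
    much as a real page's LSN advances on modification."""
    def __init__(self, rows):
        self.rows = rows
        self.version = 0

    def update(self, i, row):
        self.rows[i] = row
        self.version += 1

class Cache:
    """Device-side snapshot of a page, stamped with the page version
    that was current when the copy was taken."""
    def __init__(self, page):
        self.rows = list(page.rows)
        self.version = page.version

    def is_stale(self, page):
        return self.version != page.version

page = Page([{"val": 10}, {"val": 25}])
cache = Cache(page)

page.update(0, {"val": 99})   # a concurrent writer touches the page
assert cache.is_stale(page)   # cache must be refreshed before use
```

Without some check of this shape, qualifier results computed on the cached copy can silently disagree with the live data once hint bits, locks, or visibility information change underneath it.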
> OK, I'll move the portion that will be needed commonly for other FDWs into
> the backend code.
Alright- but realize that there may be objections there on the basis
that the code/structures which you're exposing aren't, and will not be,
stable. I'll have to go back and look at them myself, certainly, and
their history.
> Yes. According to the previous discussion around getting postgres_fdw
> merged, all we can trust on the remote side are built-in data types,
> functions, operators, and other built-in objects.
Well, we're going to need to expand that a bit for aggregates, I'm
afraid, but we should be able to define the API for those aggregates
very tightly based on what PG does today and require that any FDW
purporting to provide those aggregates do it the way PG does. Note
that this doesn't solve all the problems- we've got other issues with
regard to pushing aggregates down into FDWs that need to be solved.
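The shape of aggregate pushdown being discussed can be sketched as follows: the remote side returns *partial* aggregate states computed exactly the way PostgreSQL defines them, and the local side combines and finalizes them. Shown here for avg(), whose transition state is (sum, count); the function names are illustrative, not a real FDW callback API.

```python
def remote_partial_avg(rows):
    """What a pushed-down partial avg() would ship back from one
    remote server: its (sum, count) transition state."""
    return (sum(rows), len(rows))

def combine_avg(states):
    """Local side: merge the partial states, then finalize."""
    total = sum(s for s, _ in states)
    count = sum(c for _, c in states)
    return total / count

# Three "remote servers", each holding a slice of the data.
shards = [[1, 2, 3], [4, 5], [6]]
partials = [remote_partial_avg(s) for s in shards]

assert combine_avg(partials) == 3.5   # same as avg over all six rows
```

This only works if every remote computes the state the same way PG does, which is why the API contract for pushed-down aggregates has to be defined so tightly.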
> The custom-scan node is intended to run on regular relations, not only
> on foreign tables. That means a special feature (like GPU acceleration)
> can work transparently for most existing applications, which usually
> define regular tables for their data at installation time, not foreign
> tables. That is my biggest concern.
The line between a foreign table and a local one is becoming blurred
already, but still, if this is the goal then I really think the
background worker is where you should be focused, not on this Custom
Scan API. Consider that, once we've got proper background workers,
we're going to need new nodes which operate in parallel (or some other
rejiggering of the nodes- I don't pretend to know exactly what Robert is
thinking here, and I've apparently forgotten it if he's posted it
somewhere) and those interfaces may drive changes which would impact the
Custom Scan API- or worse, make us deprecate or regret having added it
because now we'll need to break backwards compatibility to add in the
parallel node capability to satisfy the more general non-GPU case.
> I may have worded that poorly. Anyway, I want plan nodes that let
> extensions define their own behavior; that is similar to ForeignScan,
> but it must also work on regular relations. Also, since custom-scan and
> foreign-scan nodes, like any plan nodes, work through the same interface
> to cooperate with other nodes, it is not strange that the two interfaces
> are similar.
It sounds a lot like you're trying to define, external to PG, what
Robert is already trying to get going *internal* to PG, and I really
don't want to end up in a situation where we've got a solution for the
uncommon case but aren't able to address the common case due to risk of
breaking backwards compatibility...
Thanks,
Stephen