From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Kouhei Kaigai <kaigai(at)ak(dot)jp(dot)nec(dot)com>
Cc: Ashutosh Bapat <ashutosh(dot)bapat(at)enterprisedb(dot)com>, Kohei KaiGai <kaigai(at)kaigai(dot)gr(dot)jp>, Shigeru Hanada <shigeru(dot)hanada(at)gmail(dot)com>, Jim Mlodgenski <jimmy76(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, PgHacker <pgsql-hackers(at)postgresql(dot)org>, Peter Eisentraut <peter_e(at)gmx(dot)net>
Subject: Re: Custom Scan APIs (Re: Custom Plan node)
Date: 2014-02-26 07:30:03
Message-ID: 20140226073003.GD2921@tamriel.snowman.net
Lists: pgsql-hackers
* Kouhei Kaigai (kaigai(at)ak(dot)jp(dot)nec(dot)com) wrote:
> By "regular" I mean ordinary tables. Even though a custom
> implementation may reference a self-managed in-memory cache instead of
> the raw heap, the table referenced in the user's query is still an
> ordinary table.
> In the past, Hanada-san proposed an enhancement of the FDW interface
> to support remote joins, but it was eventually rejected.
I'm not aware of the specifics around that proposal but I don't believe
we, as a community, have decided to reject the idea in general.
> The changes to the backend are just for convenience. We could
> implement the functions that translate a Bitmapset to and from cstring
> form inside postgres_fdw itself, but does it make sense to maintain
> them separately?
Perhaps not.
> I thought these functions would be commonly useful to have in the
> backend, but their absence is not a fundamental gap in the custom-scan
> interface.
Then perhaps they should be exposed more directly? I can understand
generally useful functionality being exposed in a way that anyone can
use it, but we need to avoid interfaces which can't be stable due to
normal / ongoing changes to the backend code.
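For illustration, something along these lines is what I'd imagine such a
backend-exposed helper looking like. This is a minimal sketch, the name
bms_to_cstring is hypothetical, and only the output direction is shown:

/*
 * Hypothetical sketch (not in any patch): serialize a Bitmapset to a
 * palloc'd cstring such as "(b 3 5 9)".  Uses only existing backend
 * APIs; bms_first_member() is destructive, hence the copy.
 */
#include "postgres.h"
#include "lib/stringinfo.h"
#include "nodes/bitmapset.h"

static char *
bms_to_cstring(const Bitmapset *bms)
{
    StringInfoData buf;
    Bitmapset  *tmp = bms_copy(bms);
    int         x;

    initStringInfo(&buf);
    appendStringInfoString(&buf, "(b");
    while ((x = bms_first_member(tmp)) >= 0)
        appendStringInfo(&buf, " %d", x);
    appendStringInfoChar(&buf, ')');
    bms_free(tmp);
    return buf.data;
}

The inverse, cstring-to-Bitmapset, would be the fiddlier half to keep
stable, which is really the argument for housing both in the backend
rather than in each extension.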
> I can also see the usefulness of pushing joins or aggregation to the
> remote side when foreign tables are referenced. In a similar way, it
> would be useful to push these CPU-intensive operations to
> co-processors when regular tables are referenced.
That's fine, if we can get data to and from those co-processors
efficiently enough that it's worth doing so. If moving the data into the
GPU's memory takes longer than running the actual aggregation, then it
doesn't make any sense for regular tables: we'd have to cache the data
in the GPU's memory in some way across multiple queries to amortize the
transfer, and that isn't something we're set up to do.
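As a rough back-of-envelope illustration (the numbers are purely
assumed): at a practical PCIe 3.0 x16 bandwidth of around 12GB/s,
shipping 1GB of tuples to the GPU costs roughly 80ms before a single
row is aggregated, so the GPU has to beat the CPU by more than that
just to break even, and that ignores the cost of getting results back.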
> As I mentioned above, the backend changes in the part-2 and part-3
> patches are just minor stuff, and I thought they should not be
> implemented locally in a contrib module.
Fine- then propose them as generally useful additions, not as patches
which are supposed to just be for contrib modules using an already
defined interface. If you can make a case for that then perhaps this is
more practical.
> Regarding the conditions under which we can run remote aggregation,
> you are right. Just as postgres_fdw currently pushes qualifiers down
> to the remote side, we would need to ensure that the remote aggregate
> definition is identical to the local one.
Of course.
> No. What I want to implement is to read the regular table, transfer
> the contents into the GPU's local memory for calculation, and then
> receive the calculation result. The in-memory cache (which I'm also
> working on) is supplemental, because disk access is much slower and a
> row-oriented data structure is not suitable for SIMD-style
> instructions.
Is that actually performant? Is it actually faster than processing the
data directly? The discussions that I've had with folks have cast a
great deal of doubt in my mind about just how well that kind of quick
turn-around to the GPU's memory actually works.
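For what it's worth, the row- versus column-layout point above is the
usual array-of-structs versus struct-of-arrays distinction. A toy
sketch (not from any patch, names made up) of why the columnar form
suits SIMD:

#include <stddef.h>

typedef struct          /* row-oriented: one tuple per struct (AoS) */
{
    int     id;
    double  price;
    double  qty;
} RowTuple;

typedef struct          /* column-oriented: one array per attribute (SoA) */
{
    double *price;
    double *qty;
    size_t  nrows;
} ColumnChunk;

/*
 * Summing from the columnar layout reads contiguous doubles, which a
 * compiler (or a GPU kernel) can vectorize; summing price out of a
 * RowTuple[] strides over the unused fields on every iteration.
 */
static double
sum_price(const ColumnChunk *c)
{
    double      sum = 0.0;

    for (size_t i = 0; i < c->nrows; i++)
        sum += c->price[i];
    return sum;
}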
> > This really strikes me as the wrong approach for an FDW join-pushdown API,
> > which should be geared around giving the remote side an opportunity on a
> > case-by-case basis to cost out joins using whatever methods it has available
> > to implement them. I've outlined above the reasons I don't agree with just
> > making the entire planner/optimizer pluggable.
> >
> I'm also inclined toward having arguments that provide enough
> information for extensions to determine the best path for themselves.
For join push-down, I proposed above that we have an interface to the
FDW which allows us to ask how much each join among the tables on a
given FDW's server would cost if the FDW performed it, versus pulling
the data back and joining locally. We could also pass all of the
relations to the FDW with the various join quals and try to get an
answer for everything at once, but I'm afraid that would simply end up
duplicating the optimizer's logic in every FDW, which would be
counter-productive. Admittedly, getting the costing right isn't easy
either, but it's not clear to me how it would make sense for the local
server to be doing costing for remote servers.
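To sketch the shape of what I'm proposing (this is hypothetical; no
such hook exists today, and the signature is just modeled on the
existing GetForeignPaths callback):

/*
 * Hypothetical hook: called once per candidate join between relations
 * that all live on the same foreign server.  The FDW may add a
 * pushed-down join path carrying its own cost estimate, or add
 * nothing to decline the join.
 */
typedef void (*GetForeignJoinPaths_function) (PlannerInfo *root,
                                              RelOptInfo *joinrel,
                                              RelOptInfo *outerrel,
                                              RelOptInfo *innerrel,
                                              JoinType jointype,
                                              List *restrictlist);

Any path the FDW adds would then compete with the local join paths
through the usual add_path() machinery, so the local-versus-remote
comparison falls out of infrastructure we already have.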
Thanks,
Stephen