From: Jeff Davis <pgsql(at)j-davis(dot)com>
To: Greg Smith <greg(at)2ndquadrant(dot)com>
Cc: Andrew Dunstan <andrew(at)dunslane(dot)net>, Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>, Daniel Farina <drfarina(at)gmail(dot)com>, Hannu Krosing <hannu(at)krosing(dot)net>, Robert Haas <robertmhaas(at)gmail(dot)com>, Daniel Farina <dfarina(at)truviso(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: [PATCH 4/4] Add tests to dblink covering use of COPY TO FUNCTION
Date: 2009-11-30 02:23:39
Message-ID: 1259547819.3355.31.camel@jdavis
Lists: pgsql-hackers
On Fri, 2009-11-27 at 20:28 -0500, Greg Smith wrote:
> In the context of the read case, I'm not as sure it's so black and
> white. While the current situation does map better to a function that
> produces a stream of bytes, that's not necessarily the optimal approach
> for all situations. It's easy to imagine a function intended for
> accelerating bulk loading that is internally going to produce a stream
> of already processed records.
The binary COPY mode is one of the closest things I can think of to
"already-processed records". Is binary COPY slow? If so, the only thing
faster would have to be machine-specific, I think.
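For reference, a minimal sketch of what binary-mode COPY looks like
(the table and file names here are hypothetical):

  COPY mytable TO '/tmp/mytable.dat' WITH BINARY;   -- rows in the internal binary format
  COPY mytable FROM '/tmp/mytable.dat' WITH BINARY; -- and loading them back

The binary format carries each field's raw internal representation, so
there's very little parsing overhead left to eliminate on load.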
> I think there's a very valid use-case for both approaches.
...
> COPY target FROM FUNCTION foo() WITH RECORDS;
In what format would the records be?
Also, this still doesn't really answer why INSERT ... SELECT isn't good
enough. If the records really are in their internal format, then
INSERT ... SELECT seems like the way to go.
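To illustrate, if foo() is a set-returning function whose result type
matches the target table (names hypothetical), the existing syntax
already covers the "records" case with no serialization format to
define at all:

  -- rows flow from foo() into target in their native types
  INSERT INTO target SELECT * FROM foo();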
Regards,
Jeff Davis