From: Jeff Davis <pgsql(at)j-davis(dot)com>
To: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
Cc: Daniel Farina <drfarina(at)gmail(dot)com>, Hannu Krosing <hannu(at)krosing(dot)net>, Greg Smith <greg(at)2ndquadrant(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Daniel Farina <dfarina(at)truviso(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: [PATCH 4/4] Add tests to dblink covering use of COPY TO FUNCTION
Date: 2009-11-25 08:37:10
Message-ID: 1259138230.19289.253.camel@jdavis
Lists: pgsql-hackers
On Wed, 2009-11-25 at 09:23 +0100, Pavel Stehule wrote:
> > If SRFs use a tuplestore in that situation, it sounds like that should
> > be fixed. Why do we need to provide alternate syntax involving COPY?
>
> It isn't a problem of the SRF function design. It allows both modes -
> row and tuplestore.
select * from generate_series(1,1000000000) limit 1;
That statement takes a long time, which indicates to me that it's
materializing the result of the SRF. And there's no insert there.
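For illustration, a minimal sketch of how to observe this from psql, using
the standard \timing meta-command (actual timings will of course vary):

\timing on
-- slow: suggests the whole SRF result is materialized before LIMIT applies
select * from generate_series(1,1000000000) limit 1;
-- fast by comparison: a tiny series with the same LIMIT
select * from generate_series(1,10) limit 1;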
> This is a problem of the INSERT statement, or rather of the INSERT INTO
> SELECT implementation.
If "tmp" is a new table, and "zero" is a table with a million zeros in
it, then:
insert into tmp select 1/i from zero;
fails instantly (division by zero). That tells me that it's not
materializing the result of the SELECT; rather, it's feeding the rows in
one at a time.
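For reference, a minimal setup that reproduces this; the table and column
definitions here are assumptions (only the table names appear above), with
"i" holding only zeros so that 1/i errors on the very first row:

create table zero (i int);
insert into zero select 0 from generate_series(1, 1000000);
create table tmp (x int);
-- errors out immediately with "division by zero" instead of first
-- materializing a million rows, i.e. rows are fed to the INSERT one at a time
insert into tmp select 1/i from zero;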
Can you show me in more detail what you mean? I'm having difficulty
understanding your short replies.
Regards,
Jeff Davis