From: Greg Stark <gsstark(at)mit(dot)edu>
To: Tino Wildenhain <tino(at)wildenhain(dot)de>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Mark Woodward <pgsql(at)mohawksoft(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: COPY (query) TO file
Date: 2006-06-02 21:46:41
Message-ID: 87ejy7dyda.fsf@stark.xeocode.com
Lists: pgsql-hackers

Tino Wildenhain <tino(at)wildenhain(dot)de> writes:
> Tom Lane wrote:
> > Tino Wildenhain <tino(at)wildenhain(dot)de> writes:
> >> Ok, but why not just implement this into pg_dump or psql?
> >> Why bother the backend with that functionality?
> >
> > You're not seriously suggesting we reimplement evaluation of WHERE clauses
> > on the client side, are you?
No, he's suggesting the client implement COPY formatting after fetching a
regular result set.
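
As a rough sketch of what that would look like on the client side (the query and formatting details here are purely illustrative, not from any patch), the client just walks an ordinary result and prints it in something resembling COPY's text format:

/*
 * Sketch only: fetch an ordinary SELECT result with libpq and emit it in
 * something like COPY's text format (tab separators, \N for NULL).  Real
 * COPY also escapes tabs, newlines and backslashes in the data; that's
 * omitted here.  Connection parameters come from the environment.
 */
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn     *conn = PQconnectdb("");
    PGresult   *res;
    int         r, c, nrows, ncols;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* libpq buffers the whole result in memory before we see any of it */
    res = PQexec(conn, "SELECT relname, relpages FROM pg_class");
    if (PQresultStatus(res) != PGRES_TUPLES_OK)
    {
        fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
        PQclear(res);
        PQfinish(conn);
        return 1;
    }

    nrows = PQntuples(res);
    ncols = PQnfields(res);
    for (r = 0; r < nrows; r++)
    {
        for (c = 0; c < ncols; c++)
        {
            if (c > 0)
                putchar('\t');
            fputs(PQgetisnull(res, r, c) ? "\\N" : PQgetvalue(res, r, c), stdout);
        }
        putchar('\n');
    }

    PQclear(res);
    PQfinish(conn);
    return 0;
}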
Of course this runs into the same problem other clients have when dealing with
large result sets: libpq doesn't let the client consume partial results, so
you have to buffer the entire result set in memory.
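
The usual workaround, sketched below assuming an open connection inside a transaction block, is to page through the query with a cursor so only one batch of rows is buffered client-side at a time:

#include <libpq-fe.h>

/*
 * Sketch of the usual workaround (not from the original mail): page through
 * the query with a cursor so libpq only buffers one FETCH batch at a time.
 * Assumes conn is an open connection and we're already inside a transaction
 * block; error handling is minimal.
 */
static void
dump_with_cursor(PGconn *conn)
{
    PGresult   *res;

    res = PQexec(conn,
                 "DECLARE c NO SCROLL CURSOR FOR SELECT relname FROM pg_class");
    PQclear(res);

    for (;;)
    {
        res = PQexec(conn, "FETCH 1000 FROM c");
        if (PQresultStatus(res) != PGRES_TUPLES_OK || PQntuples(res) == 0)
        {
            PQclear(res);           /* error or end of cursor */
            break;
        }
        /* ... format and emit PQntuples(res) rows as in the sketch above ... */
        PQclear(res);
    }

    res = PQexec(conn, "CLOSE c");
    PQclear(res);
}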
I was also vaguely pondering whether all the DML commands could be generalized
to receive or send COPY-formatted data for repeated execution. It would be
neat to be able to prepare an UPDATE with placeholders and stream data in COPY
format as its parameters, executing it thousands or millions of times without
per-statement protocol overhead or network pipeline stalls.
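
To make the contrast concrete, here is roughly what the status quo looks like with libpq's PQprepare/PQexecPrepared (the table, statement and variable names are invented for the example); every execution is its own round trip, which is exactly the per-row overhead a COPY-style parameter stream would eliminate:

#include <stdio.h>
#include <libpq-fe.h>

/*
 * Status-quo sketch: a prepared UPDATE executed once per row.  The names
 * "prices", "upd", price_text and item_id_text are illustrative only.
 * Each PQexecPrepared call is a separate protocol round trip.
 */
static void
update_prices(PGconn *conn, int nitems,
              const char *const *price_text,
              const char *const *item_id_text)
{
    const char *values[2];
    PGresult   *res;
    int         i;

    res = PQprepare(conn, "upd",
                    "UPDATE prices SET price = $1 WHERE item_id = $2",
                    2, NULL);
    PQclear(res);

    for (i = 0; i < nitems; i++)
    {
        values[0] = price_text[i];      /* parameters passed as text */
        values[1] = item_id_text[i];

        res = PQexecPrepared(conn, "upd", 2, values, NULL, NULL, 0);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "update failed: %s", PQerrorMessage(conn));
        PQclear(res);
    }
}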
--
greg