From: "Mark Woodward" <pgsql(at)mohawksoft(dot)com>
To: "Tino Wildenhain" <tino(at)wildenhain(dot)de>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: COPY (query) TO file
Date: 2006-06-02 21:23:25
Message-ID: 18710.24.91.171.78.1149283405.squirrel@mail.mohawksoft.com
Lists: pgsql-hackers
> Mark Woodward wrote:
> ...
>>>>> create table as select ...; followed by a copy of that table,
>>>>> if it really is faster than just the usual select & fetch?
>>>> Why "create table?"
>>> Just to simulate and time the proposal.
>>> SELECT ... already works over the network, and if COPY from a
>>> select (which would basically work like yet another wire
>>> protocol) isn't significantly faster, why bother?
>>
>> Because the format of COPY is a common transmitter/receiver for
>> PostgreSQL, like this:
>>
>> pg_dump -t mytable | psql -h target -c "COPY mytable FROM STDIN"
>>
>> With a more selective copy, you can use pretty much this same
>> mechanism to limit a copy to a subset of the records in a table.
>
> Ok, but why not just implement this into pg_dump or psql?
> Why bother the backend with that functionality?
Because "COPY" runs on the backend, not the frontend, and the frontend
may not even be in the same city as the backend. When you issue a "COPY",
the file it reads or writes is local to the backend. True, the examples I
gave may not show why that matters, but consider this:
psql -h remote masterdb -c "COPY (select * from mytable where ID <
xxlastxx) as mytable TO '/replicate_backup/mytable-060602.pgc'"
This runs entirely on the backend and can serve as a running backup.
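
To make the pipe idea concrete, here is a sketch, assuming the proposed
COPY-from-a-query form also supports STDOUT the way plain COPY does (the
exact grammar, including the "as mytable" clause above, was still under
discussion; the host names, table, and id cutoff are illustrative, the
cutoff standing in for the xxlastxx placeholder):

    # Stream a subset of mytable from a source server into the same
    # table on a target server, without staging a file anywhere.
    psql -h source masterdb -c \
        "COPY (SELECT * FROM mytable WHERE id < 1000) TO STDOUT" \
      | psql -h target replicadb -c "COPY mytable FROM STDIN"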
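
And a sketch of the running-backup idea driven from cron, with the
schedule, target path, and cutoff all illustrative (note that in a
crontab, % must be escaped as \%, since cron otherwise treats it as a
newline):

    # Nightly at 02:00: the backend, not the client, writes the
    # selective dump to a path local to the server running masterdb.
    0 2 * * * psql -h remote masterdb -c "COPY (SELECT * FROM mytable WHERE id < 1000) TO '/replicate_backup/mytable-$(date +\%y\%m\%d).pgc'"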