From: Tino Wildenhain <tino(at)wildenhain(dot)de>
To: Mark Woodward <pgsql(at)mohawksoft(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: COPY (query) TO file
Date: 2006-06-02 21:29:48
Message-ID: 4480ADCC.307@wildenhain.de
Lists: pgsql-hackers
Mark Woodward wrote:
...
>>> pg_dump -t mytable | psql -h target -c "COPY mytable FROM STDIN"
>>>
>>> With a more selective copy, you can use pretty much this mechanism to
>>> limit a copy to a subset of the records in a table.
>> Ok, but why not just implement this in pg_dump or psql?
>> Why bother the backend with that functionality?
>
> Because "COPY" runs on the backend, not the frontend, and the frontend
> may not even be in the same city as the backend. When you issue a "COPY",
> the file it reads or writes is local to the backend. True, the examples I
> gave may not show why that is important, but consider this:
We were talking about COPY to stdout :-) COPY to file is another
issue :-) COPY to a (server filesystem) file has so many limitations
that I don't see wide use for it. (Of course there are use cases.)
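For concreteness, the distinction in question; both forms exist, and
the table name and paths here are purely illustrative:

    -- runs in the backend; the file lands on the server's filesystem
    COPY mytable TO '/tmp/mytable.copy';

    -- psql meta-command; data flows over the connection and the file
    -- lands on the client's filesystem
    \copy mytable to '/tmp/mytable.copy'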
> psql -h remote masterdb -c "COPY (select * from mytable where ID <
> xxlastxx) as mytable TO '/replicate_backup/mytable-060602.pgc'"
>
> This runs completely in the background and can serve as a running backup.
And you are sure it would be much faster than a psql running locally
on the server, just dumping the result of a query?
(And you could more easily avoid race conditions, in contrast to several
remote clients trying to trigger your backup above.)
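Something like the following is what I mean; a rough sketch, run on the
server host itself, with the query filter and output path kept from the
example above purely for illustration:

    psql masterdb -A -t \
        -c "SELECT * FROM mytable WHERE id < xxlastxx" \
        > /replicate_backup/mytable-060602.out

(-A gives unaligned output, -t suppresses headers and footers.)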
But what do I know... I was just asking :-)
Regards
Tino