From: Craig Ringer <craig(at)postnewspapers(dot)com(dot)au>
To: kevin kempter <kevin(at)kevinkempterllc(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: COPY to STDOUT and pipes
Date: 2008-04-15 07:16:13
Message-ID: 4804563D.7000302@postnewspapers.com.au
Lists: pgsql-general
kevin kempter wrote:
> Any thoughts on what I'm doing wrong?
I suspect that pg_dump will do a better job than psql at generating the
input for the remote load. pg_dump can dump a single table, and it can
emit the data in COPY format.
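For example, something along these lines should move one table directly
(a sketch only; "sourcedb" and "targetdb" are placeholder database names,
not from your post):

pg_dump --data-only -t sl_cust sourcedb | psql -h newhost targetdb

pg_dump emits COPY-format data by default, so its output can be fed
straight into a psql session on the new host with no intermediate file.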
As for why your current command isn't working: you omitted the SQL in
`file2.sql' that you use for the restore, which makes this harder to
diagnose. At a guess, the COPY ... FROM STDIN in that file expects its
data to follow the COPY command directly in the input file, rather than
to arrive on the psql command's stdin.
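To illustrate that guess: when psql runs a script supplied with -f and
hits COPY ... FROM STDIN, it reads the data rows from the lines that
immediately follow the command in that same script, up to a terminating
\. line; the psql process's own stdin (your pipe) is never consulted.
So a working `file2.sql' would have to embed the data itself, something
like this (the data row here is made up purely for illustration):

COPY tablename FROM STDIN WITH DELIMITER '|';
1|First Customer|2008-04-01
\.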
I do have one suggestion that's ugly but may work if you can't figure
out what's going wrong with the existing method and you're not happy
with using pg_dump for some reason:
You could potentially insert psql \echo commands into the first script,
so that the command you're using to extract the data produces a valid
sequence of SQL commands that the second psql can read from stdin
(instead of using -f to read a command file). So if `file1.sql' becomes:
-- Emit the COPY command for the receiving server to execute. Its
-- delimiter must match the one used for the data below; the embedded
-- quotes around | are written as \' in psql's quoting:
\echo 'COPY tablename FROM STDIN WITH DELIMITER \'|\';'
copy (
    select
        cust_id,
        cust_name,
        last_update_dt
    from sl_cust
)
to STDOUT
with delimiter '|';
-- The trailing semicolon above is required, or psql never runs the COPY.
-- Finally, emit the end-of-data marker "\." that COPY FROM STDIN expects:
\echo '\\.'
then you might be able to use a command line like:
psql -f file1.sql | psql -h newhost
(note that the second psql is reading the first one's stdout as its stdin).
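If stray output from a ~/.psqlrc makes it into the pipe, psql's -X
switch (skip .psqlrc) on both ends should keep the stream clean, e.g.
(database names are placeholders again):

psql -X -f file1.sql sourcedb | psql -X -h newhost targetdb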
--
Craig Ringer