From: Sean Davis <sdavis2(at)mail(dot)nih(dot)gov>
To: rasdj(at)frontiernet(dot)net
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Multiple COPYs
Date: 2005-06-17 14:32:52
Message-ID: 7afc66d235e2e295a18d4ed5de5317ff@mail.nih.gov
Lists: pgsql-general
On Jun 16, 2005, at 12:32 PM, rasdj(at)frontiernet(dot)net wrote:
> Hello,
>
> Having a great time with PG - ported an erp from oracle and db2. First
> I tried MySql but choked somewhere in the 900 table region. I have a
> python script to translate the syntax and it loads about 2000 tables.
>
> Now I want to COPY my dumps - I have 1 data dump for each table. Any
> tips on what to use so that I can read the file name into a variable
> and pass it as the file name in the COPY command and have one script
> load all my tables?
Why not use Python or a simple shell script to generate a file like:
\copy table1 from 'table1.txt'
\copy table2 from 'table2.txt'
....
And then run: psql -f filename
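
The generator step above could be sketched like this (a minimal example, assuming each dump file is named <tablename>.txt in the current directory; the file names and database name are placeholders, not from the original post):

```shell
#!/bin/sh
# Emit one \copy line per dump file into a single psql script.
# The table name is taken from the file name by stripping ".txt".
for f in *.txt; do
    t="${f%.txt}"                          # table1.txt -> table1
    printf '%s\n' "\\copy $t from '$f'"    # %s prints the line verbatim
done > load_all.sql

# Then load every table in one session, e.g.:
# psql -d mydb -f load_all.sql
```

Using printf '%s\n' rather than echo avoids any shell interpreting the leading backslash in \copy.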
Sean