From: "Gordon A(dot) Runkle" <gar(at)no-spam-integrated-dynamics(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Migrate from MS SQL 6.5 to postgres??
Date: 2001-03-01 06:18:29
Message-ID: 97kpha$260r$1@news.tht.net
Lists: pgsql-general
In article <OFB13C2AF2(dot)443BAB84-ON80256A00(dot)003E8B1C(at)cbis(dot)com>, "Unknown"
<martin(dot)chantler(at)convergys(dot)com> wrote:
> I have an idea that might help. I found ODBC to be very slow for
> importing data, so I wrote a program in C that reads in dump files of
> SQL text on the Linux server itself; e.g., the first line is a CREATE
> TABLE, and the following lines are all the INSERTs. This is very fast:
> 80 MB of data in about 15 minutes. The only problem is that the text
> files need to be formatted a bit specially. If you can write a program
> in, say, VB to create the text files (one per table), it could work.
> If you are interested, I could forward my C program and the FoxPro prg
> that creates the text files, which you could convert to VB.
Why make it so difficult? SQL Server provides a perfectly
usable bulk copy utility (bcp.exe), which will haul the data
out ready-to-go.
H:\tmp> bcp dbname..tabname out filename.del -c -t "|" -r "\n" \
-S server -U user -P password
This will pull the data out, with '|' as the field delimiter
and a newline as a record separator.
Now you can COPY the data in using '|' as the delimiter.
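For example, the server-side import might look like this (a sketch only: the table name, file path, and old-style USING DELIMITERS syntax of that era's PostgreSQL are illustrative; the file must be readable by the backend):

```sql
-- Hypothetical table and path; run from psql as a superuser,
-- since server-side COPY reads the file on the database host.
COPY tabname FROM '/tmp/filename.del' USING DELIMITERS '|';
```

If the file lives on the client machine instead, psql's \copy command does the same thing from the client side.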
If you have BLOB data types, those tables will have to
be handled in another way, of course.
Gordon.
--
It doesn't get any easier, you just go faster.
-- Greg LeMond