From: "Roman Fail" <rfail(at)posportal(dot)com>
To: <chris(at)upnix(dot)com>
Cc: <pgsql-general(at)postgresql(dot)org>
Subject: Re: MSSQL -> PostgreSQL
Date: 2003-05-17 01:22:12
Message-ID: 9B1C77393DED0D4B9DAA1AA1742942DA3BCBA9@pos_pdc.posportal.com
Lists: pgsql-general
I converted a 10 gigabyte production database from MSSQL to Postgres a few months back. I tried all the various conversion methods and here are my impressions:
* MS Data Transformation Services - converted all the data correctly, but it was very slow and always seemed to choke after about 400,000 records, probably at the point where it exhausted the machine's 2 GB+ of memory.
* MS bcp & psql copy - required quite a bit of hand editing to handle IDENTITY/serial columns and some other minor issues. Too painful if there are a lot of tables.
* pgAdmin2 Migration Wizard - awesome. No memory problems. Not only did it figure out how to convert IDENTITY columns, it offered the option to fold all table and field names to lower case, which solves the double quote problem. I used this tool to convert all my tables save one, which contained VARBINARY datatypes (the wizard just ignores binary fields). To convert those I had to use bcp with a special SQL Server UDF that converts each byte from hexadecimal to an octal escape sequence, which is the only form of binary data the psql 'copy' command can read (a rough sketch of the escape format is below). I can send you more detailed information and the UDF if you are interested.
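To give you an idea of the escape format (this is a made-up table and made-up data, not my real schema or UDF): MSSQL's bcp emits binary as hex like 0xDEADBEEF, while psql's copy wants each byte written as a doubled-backslash octal triple in the data file, roughly like this:

CREATE TABLE card_swipe (id int4, track_data bytea);

-- A tab-delimited line in the copy file for the bytes DE AD BE EF
-- would look like:  1	\\336\\255\\276\\357
COPY card_swipe FROM '/tmp/card_swipe.dat';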
Once I got the data into Postgres, my query times on the big tables were horrible compared to MSSQL, and I thought I would have to switch back. After several days of discussion on the PostgreSQL Performance mailing list (thanks guys!!!), I figured out that there were data type mismatches in my JOIN conditions that prevented my indexes from being used. This problem is not very intuitive and is easily missed by someone with an MSSQL background. So I recommend settling on either 'int4' or 'int8' for all your migrated integer datatypes...because if you mix them you'll have the same problems I did!
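Here is a made-up example of the kind of mismatch that bit me and the workaround (table and column names are invented, not my real schema):

-- The join keys have different integer types, so the index on
-- batch.batchid gets ignored and you end up with sequential scans:
CREATE TABLE batch (batchid int8 PRIMARY KEY, settled date);
CREATE TABLE trans (transid int8 PRIMARY KEY, batchid int4, amount numeric);

EXPLAIN SELECT * FROM trans t, batch b WHERE t.batchid = b.batchid;

-- Either declare both columns as the same type when you migrate, or
-- cast in the query as a stopgap so the comparison is int8 = int8:
EXPLAIN SELECT * FROM trans t, batch b WHERE t.batchid::int8 = b.batchid;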
One caveat of the whole migration, which probably took the most time of all: I had about 20 stored procedures, and it was a pain to convert them from Transact-SQL to PL/pgSQL. Once I got the hang of it it wasn't too bad, but it's not as intuitive as Transact-SQL IMHO.
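Just so you know what you're in for, here is a toy example of the translation (made up for illustration, not one of my real procedures). The Transact-SQL version:

CREATE PROCEDURE get_batch_total @batchid int AS
BEGIN
    SELECT SUM(amount) FROM trans WHERE batchid = @batchid
END

and a rough PL/pgSQL equivalent (no named parameters here, so you alias $1, and the function body goes inside single quotes):

CREATE FUNCTION get_batch_total(int8) RETURNS numeric AS '
DECLARE
    p_batchid ALIAS FOR $1;
    v_total   numeric;
BEGIN
    SELECT INTO v_total SUM(amount) FROM trans WHERE batchid = p_batchid;
    RETURN v_total;
END;
' LANGUAGE 'plpgsql';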
Once I fixed that datatype problem I was flying high - Postgres is significantly faster than MSSQL in my experience, and VERY stable.
Roman Fail
POS Portal, Inc.
-----Original Message-----
From: Ian Harding [mailto:ianh(at)tpchd(dot)org]
Sent: Fri 5/16/2003 12:36 PM
To: chris(at)upnix(dot)com
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: MSSQL -> PostgreSQL
MSSQL Server and PostgreSQL are both very SQL standard compliant. If you are only talking about tables and data, this is a relatively easy project regardless of the size of the tables. If you have views, stored procedures, triggers, etc, you may be in for some work, but I doubt you do since you could convert to MySQL.
The suggestions so far (PGAdmin, dump and copy) are both feasible, and there is also the MSSQL Server Data Transformation Services tool (or whatever it's called now) which can talk directly to PostgreSQL via ODBC. I have heard it doesn't know how to convert MSSQL's version of SERIAL to PostgreSQL's, but you could fix that later with ALTER TABLE ... ALTER COLUMN ... SET DEFAULT ....
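Something along these lines should do it once the data is loaded (the table, column, and sequence names here are made up for illustration):

-- Recreate the IDENTITY behaviour with a sequence:
CREATE SEQUENCE customers_custid_seq;

-- Prime it past the highest value that was imported:
SELECT setval('customers_custid_seq', (SELECT max(custid) FROM customers));

-- Hook it up as the column default:
ALTER TABLE customers
    ALTER COLUMN custid SET DEFAULT nextval('customers_custid_seq');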
Good luck! It is worth the effort.
Ian Harding
Programmer/Analyst II
Tacoma-Pierce County Health Department
iharding(at)tpchd(dot)org
(253) 798-3549
>>> Chris Cameron <chris(at)upnix(dot)com> 05/09/03 09:16AM >>>
I'm looking to convert 2 MSSQL DB's to PostgreSQL. I've searched the
archives and various websites and found a number of solutions.
The problem is, none of them work for me. One of the databases is 150
Megs, the other 3 Gigs. It isn't very feasible for me to go into a 3 gig
file and search/replace all sorts of things (which seems pretty "iffy" a
solution to me).
I've also tried converting the MSSQL tables/data to MySQL dumps (we had
a -very- good tool laying around for that), and then running a
mysql2postgresql script against it. I've tried the one in
/contrib/mysql/ and the one on pgsql.com. Both died when they ate all
the memory on the machine (2 gigs worth).
So, any suggestions for someone looking to convert a 3+ gig database?
We're willing to pay for any tool that may work, but I haven't been able
to find any.
Thanks,
Chris
--
Chris Cameron
UpNIX Internet Administrator
ardvark.upnix.net
bitbucket.upnix.net
--
http://www.upnix.com