From: Andy Colson <andy(at)squeakycode(dot)net>
To: John R Pierce <pierce(at)hogranch(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Mysql to Postgresql
Date: 2011-02-23 15:12:02
Message-ID: 4D6523C2.3070103@squeakycode.net
Lists: pgsql-general
On 2/22/2011 9:33 PM, John R Pierce wrote:
> On 02/22/11 1:25 AM, Jaime Crespo Rincón wrote:
>> 2011/2/22 Adarsh Sharma<adarsh(dot)sharma(at)orkash(dot)com>:
>>> Dear all,
>>>
>>> Today I need to back up a MySQL database and restore it in a
>>> PostgreSQL database, but I don't know how to achieve this accurately.
>> Have a look at: "mysqldump --compatible=postgresql" command:
>> <http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html#option_mysqldump_compatible>
>>
>>
>> Anyway, most of the time you will need a more manual migration, with
>> human intervention (custom scripts), migrating the data through
>> something like CSV (SELECT ... INTO OUTFILE).
>
>
> If your tables aren't too huge, one method is a Perl script that
> uses DBI to connect to both mysql and pgsql, fetches a table from
> one, and loads it into the other. The catch-22 is, it's fairly hard
> to do this efficiently if the tables won't fit in memory.
>
>
> There are also various ETL (Extract, Transform, Load) tools that do
> this sort of thing with varying levels of performance and automation,
> some free, some commercial.
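For the mysqldump route mentioned above, a minimal shell sketch; the database and table names (`mydb`, `mytable`) are placeholders, and the sed step is only one example of the hand fixes the dump usually needs:

```shell
# Dump one table as (roughly) PostgreSQL-compatible SQL.
# "mydb" and "mytable" are hypothetical names, not from the thread.
mysqldump --compatible=postgresql --no-create-info mydb mytable > mytable.sql

# The output often still contains MySQL-isms; for example, convert
# any remaining backtick-quoted identifiers to standard double quotes:
sed 's/`/"/g' mytable.sql > mytable.pg.sql

# Load the cleaned dump into PostgreSQL:
psql mydb -f mytable.pg.sql
```

Expect to iterate: types, AUTO_INCREMENT columns, and zero dates typically need per-schema fixes beyond what --compatible handles.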
Actually, in mysql you can set:

$db->{"mysql_use_result"} = 1;

which causes mysql not to load the entire result set into memory. It
might be a bit slower because it has to make more round trips to the
server, but it uses very little RAM. (I use this in my own perl mysql
to pg script.)
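As a rough illustration of that approach, here is a minimal sketch of such a streaming table copy. The DSNs, credentials, and the table name "mytable" are hypothetical placeholders; error handling and type conversion are left out:

```perl
#!/usr/bin/perl
# Sketch only: connection details and table name are made up.
use strict;
use warnings;
use DBI;

my $my = DBI->connect('dbi:mysql:database=mydb', 'user', 'pass',
                      { RaiseError => 1 });
my $pg = DBI->connect('dbi:Pg:dbname=mydb', 'user', 'pass',
                      { RaiseError => 1, AutoCommit => 0 });

# mysql_use_result => 1 makes DBD::mysql stream rows from the server
# instead of buffering the whole result set in client memory.
my $sel = $my->prepare('SELECT * FROM mytable',
                       { mysql_use_result => 1 });
$sel->execute;

# Build an INSERT with one "?" placeholder per column.
my $ncols        = $sel->{NUM_OF_FIELDS};
my $placeholders = join ',', ('?') x $ncols;
my $ins = $pg->prepare("INSERT INTO mytable VALUES ($placeholders)");

# Copy row by row; memory use stays flat regardless of table size.
while (my $row = $sel->fetchrow_arrayref) {
    $ins->execute(@$row);
}
$pg->commit;
```

For large tables, replacing the row-at-a-time INSERT with DBD::Pg's COPY support (pg_putcopydata) would load considerably faster.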
-Andy