From: Gordan Bobic <gordan(at)bobich(dot)net>
To: Doug McNaught <doug(at)wireboard(dot)com>
Cc: <joe(at)jwebmedia(dot)com>, <pgsql-general(at)postgresql(dot)org>
Subject: Re: Any Good Way To Do Sync DB's?
Date: 2001-10-13 04:29:37
Message-ID: Pine.LNX.4.33.0110130527210.28869-100000@sentinel.bobich.net
Lists: pgsql-general
On 12 Oct 2001, Doug McNaught wrote:
> Joseph Koenig <joe(at)jwebmedia(dot)com> writes:
>
> > I have a project where a client has products stored in a large Progress
> > DB on an NT server. The web server is a FreeBSD box though, and the
> > client wants to try to avoid the $5,500 license for the Unlimited
> > Connections via OpenLink software and would like to take advantage of
> > the 'free' non-expiring 2 connection (concurrent) license. This wouldn't
> > be a huge problem, but the DB can easily reach 1 million records. Is
> > there any good way to pull this data out of Progress and get it into
> > Postgres? This is way too large of a db to do a "SELECT * FROM table"
> > and do an insert for each row. Any brilliant ideas? Thanks,
>
> Probably the best thing to do is to export the data from Progress in a
> format that the PostgreSQL COPY command can read. See the docs for
> details.
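For reference, the loading side of that is roughly the following (the table
layout and file path are made up here for illustration; COPY reads
tab-delimited input by default):

    -- hypothetical table matching the columns exported from Progress
    CREATE TABLE products (
        sku   text,
        name  text,
        price numeric
    );

    -- load a tab-delimited dump file produced on the Progress/NT side
    COPY products FROM '/tmp/products.tab';
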
I'm going to have to rant now. The dump and restore path that relies on COPY
is effectively useless for large databases, and the reason is simple: copying
a 4 GB table with 40M rows needs over 40 GB of temporary scratch space,
because of the WAL temp files. That is plainly silly. Why doesn't pg_dump
insert a commit every 1000 rows or so?
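What I mean is roughly this (a sketch only, not what pg_dump actually emits;
the table and values are invented):

    -- restore script broken into batches with periodic commits
    BEGIN;
    INSERT INTO products VALUES ('sku-0001', 'widget', 9.99);
    -- ... 999 more rows ...
    COMMIT;

    BEGIN;
    INSERT INTO products VALUES ('sku-1001', 'gadget', 4.50);
    -- ... and so on, committing after every 1000 rows ...
    COMMIT;

That way the server never has to keep the whole table's worth of changes
pending in one transaction.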
Cheers.
Gordan