From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Guy Rouillier" <guyr(at)masergy(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: hundreds of millions row dBs
Date: 2005-01-04 05:32:22
Message-ID: 13956.1104816742@sss.pgh.pa.us
Lists: pgsql-general
"Guy Rouillier" <guyr(at)masergy(dot)com> writes:
> Greer, Doug wrote:
>> I am interested in using Postgresql for a dB of hundreds of
>> millions of rows in several tables. The COPY command seems to be way
>> too slow. Is there any bulk import program similar to Oracle's SQL
>> loader for Postgresql? Sincerely,
> We're getting about 64 million rows inserted in about 1.5 hrs into a
> table with a multiple-column primary key - that's the only index.
> That seems pretty good to me - SQL Loader takes about 4 hrs to do the
> same job.
If you're talking about loading into an initially empty database, it's
worth a try to load into bare tables and then create indexes and add
foreign key constraints. Index build and FK checking are both
significantly faster as "bulk" operations than "incremental". Don't
forget to pump up sort_mem as much as you can stand in the backend doing
such chores, too.
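In SQL terms the sequence looks roughly like the sketch below. The table,
column, and file names are invented for illustration, and it assumes a
"devices" table is already loaded; sort_mem is per-backend and measured in
kilobytes, so pick a value your machine can afford.

  -- create the bare table: no indexes, no FK constraints yet
  CREATE TABLE readings (
      device_id   integer   NOT NULL,
      recorded_at timestamp NOT NULL,
      value       numeric
  );

  -- bulk-load the data
  COPY readings FROM '/path/to/readings.dat';

  -- give this backend generous sort memory for the index build
  SET sort_mem = 512000;

  -- now build the index and check the FK as bulk operations
  ALTER TABLE readings
      ADD PRIMARY KEY (device_id, recorded_at);
  ALTER TABLE readings
      ADD CONSTRAINT readings_device_fk
      FOREIGN KEY (device_id) REFERENCES devices (device_id);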
I have heard of people who would actually drop and recreate indexes
and/or FKs when adding a lot of data to an existing table.
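For an existing table, that amounts to something like this (again with
made-up names):

  -- get the index out of the way before appending the new batch
  DROP INDEX readings_device_idx;

  COPY readings FROM '/path/to/new_batch.dat';

  -- rebuild the index as one bulk operation
  CREATE INDEX readings_device_idx
      ON readings (device_id, recorded_at);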
regards, tom lane