From: John Rouillard <rouilj(at)renesys(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Very poor performance loading 100M of sql data using copy
Date: 2008-04-29 15:04:32
Message-ID: 20080429150432.GO6622@renesys.com
Lists: pgsql-performance
On Tue, Apr 29, 2008 at 05:19:59AM +0930, Shane Ambler wrote:
> John Rouillard wrote:
>
> >We can't do this as we are backfilling a couple of months of data
> >into tables with existing data.
>
> Is this a one off data loading of historic data or an ongoing thing?
Yes, it's a one-off bulk load of many days of data. The daily
loads will also take 3 hours, but that is OK since we only do those
once a day, so we have 21 hours of slack in the schedule 8-).
> >>>The only indexes we have to drop are the ones on the primary keys
> >>> (there is one non-primary key index in the database as well).
>
> If this amount of data importing is ongoing then one thought I would try
> is partitioning (this could be worthwhile anyway with the amount of data
> you appear to have).
>
> Create an inherited table for the month being imported, load the data
> into it, then add the check constraints, indexes, and modify the
> rules/triggers to handle the inserts to the parent table.
Hmm, interesting idea, worth considering if we have to do this again
(I hope not).
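For the archives, a minimal sketch of what Shane is describing might
look like the following. The parent table "events", its "id" and
"logtime" columns, and the file path are made-up examples, not our
real schema:

-- 1. Create a child table for the month being backfilled.
CREATE TABLE events_2008_02 () INHERITS (events);

-- 2. Bulk-load into the child with COPY (no indexes yet, so it's fast).
COPY events_2008_02 FROM '/data/backfill/2008-02.csv' WITH CSV;

-- 3. Add the check constraint describing the partition's date range.
ALTER TABLE events_2008_02 ADD CONSTRAINT events_2008_02_logtime_check
    CHECK (logtime >= '2008-02-01' AND logtime < '2008-03-01');

-- 4. Build the indexes after the load, not before.
ALTER TABLE events_2008_02 ADD PRIMARY KEY (id);
CREATE INDEX events_2008_02_logtime_idx ON events_2008_02 (logtime);

-- 5. Route inserts on the parent to the right child, e.g. with a rule.
CREATE RULE events_insert_2008_02 AS
    ON INSERT TO events
    WHERE (logtime >= '2008-02-01' AND logtime < '2008-03-01')
    DO INSTEAD
    INSERT INTO events_2008_02 VALUES (NEW.*);

With constraint_exclusion enabled, queries on the parent that filter
on logtime can then skip partitions whose check constraints rule them
out.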
Thanks for the reply.
--
-- rouilj
John Rouillard
System Administrator
Renesys Corporation
603-244-9084 (cell)
603-643-9300 x 111