From: Jeff Davis <pgsql(at)j-davis(dot)com>
To: "Jason L(dot) Buberel" <jason(at)buberel(dot)org>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Alternative to drop index, load data, recreate index?
Date: 2007-09-11 00:27:20
Message-ID: 1189470440.28581.21.camel@dogma.ljc.laika.com
Lists: pgsql-general
On Mon, 2007-09-10 at 17:06 -0700, Jason L. Buberel wrote:
> When loading very large data exports (> 1 million records) I have
> found it necessary to use the following sequence to achieve even
> reasonable import performance:
>
> 1. Drop all indices on the recipient table
> 2. Use "copy recipient_table from '/tmp/input.file';"
> 3. Recreate all indices on the recipient table
>
> However, I now have tables so large that even the 'recreate all
> indices' step is taking too long (15-20 minutes on 8.2.4).
>
> I am considering moving to date-based partitioned tables (each table =
> one month-year of data, for example). Before I go that far - are there
> any other tricks I can or should be using to speed up my bulk data
> loading?
If you create the indexes with CONCURRENTLY, then you can write to the
tables while the indexes are being created. That might help reduce your
downtime window.
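
As a rough sketch of how that would fit into your load sequence (the
table is yours, but the index name and the created_at column here are
just placeholders for illustration):

```sql
-- 1. Drop the index before the bulk load (IF EXISTS and CONCURRENTLY
--    both require 8.2 or later, which you have).
DROP INDEX IF EXISTS recipient_table_created_at_idx;

-- 2. Bulk-load the data.
COPY recipient_table FROM '/tmp/input.file';

-- 3. Rebuild the index without taking write locks on the table, so
--    other sessions can keep inserting while it builds.
CREATE INDEX CONCURRENTLY recipient_table_created_at_idx
    ON recipient_table (created_at);
```

Note that CREATE INDEX CONCURRENTLY can't run inside a transaction
block, and the build itself takes somewhat longer than a plain CREATE
INDEX; the win is that the table stays writable while it runs.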
Regards,
Jeff Davis