Re: Multiple indexes, huge table

From: Marti Raudsepp <marti(at)juffo(dot)org>
To: Aram Fingal <fingal(at)multifactorial(dot)com>
Cc: Postgres-General General <pgsql-general(at)postgresql(dot)org>
Subject: Re: Multiple indexes, huge table
Date: 2012-09-07 15:15:37
Message-ID: CABRT9RCV53e8av58S-zzoZbDcqXuuG_k6AJ843-W1udYFxC9MQ@mail.gmail.com
Lists: pgsql-general

On Fri, Sep 7, 2012 at 12:22 AM, Aram Fingal <fingal(at)multifactorial(dot)com> wrote:
> Should I write a script which drops all the indexes, copies the data and then recreates the indexes or is there a better way to do this?

There's a pg_bulkload extension that does much faster incremental
index updates for large bulk data imports, so you get the best of
both worlds: http://pgbulkload.projects.postgresql.org/

Beware, though, that this is an external add-on and is not as well
tested as core PostgreSQL. I have not tried it myself.
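
If you would rather stay with core PostgreSQL, the drop/COPY/recreate
approach you describe can be scripted in psql roughly as below. This
is only a sketch: the table, column and index names (bigtable, foo,
bar) and the file path are placeholders for your own schema, and the
COPY step assumes the data is coming from a CSV file (adjust it if
you are copying from another table instead).

-- A larger maintenance_work_mem speeds up the index rebuilds.
SET maintenance_work_mem = '1GB';

BEGIN;
-- Drop the indexes so the load doesn't have to maintain them row by row.
DROP INDEX idx_bigtable_foo;
DROP INDEX idx_bigtable_bar;

-- Client-side bulk load; plain COPY works too if the file is on the
-- server and you are superuser.
\copy bigtable FROM '/path/to/data.csv' WITH (FORMAT csv)

-- Rebuild each index in a single pass over the loaded table.
CREATE INDEX idx_bigtable_foo ON bigtable (foo);
CREATE INDEX idx_bigtable_bar ON bigtable (bar);
COMMIT;

Wrapping it in one transaction keeps the drop, load and rebuild
atomic, but note that DROP INDEX takes an exclusive lock on the
table, so concurrent queries against it will block until the
transaction commits.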

Regards,
Marti
