From: Steve Atkins <steve(at)blighty(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Fast, indexed bulk inserts
Date: 2002-03-04 23:18:00
Message-ID: 20020304151800.A41919@blighty.com
Lists: pgsql-general
Yet another "How Do I Make It Go Faster?" question:
I have a single table, with a non-unique b-tree index on a single text
column.
It's large - possibly hundreds of millions of rows.
Every so often I need to insert a batch of rows, perhaps 1,000 -
10,000 at once. I have multiple writers attempting to do this,
possibly simultaneously.
As you might guess, this isn't blindingly fast.
At the moment I'm starting a transaction, locking the table in SHARE
mode, running a series of inserts, and ending the transaction.
Any suggestions on how to speed this up? I'm prepared to sacrifice
pretty much anything, apart from the index itself, to speed up the
insertions (including short delays in index consistency - if I were
doing the indexing manually, I might keep a secondary index for all
newly inserted data and merge it into the primary index every few
minutes).
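To sketch that manual-merge idea in SQL terms (again, names are
illustrative): writers could append to an unindexed staging table,
and a single periodic job could fold the accumulated rows into the
indexed table:

```sql
-- Writers: fast appends, no index maintenance on this table.
INSERT INTO words_staging (word) VALUES ('foo');

-- Periodic merge job, run every few minutes:
BEGIN;
LOCK TABLE words_staging IN EXCLUSIVE MODE;
INSERT INTO words (word) SELECT word FROM words_staging;
DELETE FROM words_staging;
COMMIT;
```

The cost would be that queries against the main index don't see the
newest rows until the next merge, which is the sort of delay I can
live with.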
I'm trying to do something similar to full text search - any
alternative suggestions would be welcome too.
Cheers,
Steve