Re: are there any methods to disable updating index before inserting large number tuples?

From: Andres Freund <andres(at)anarazel(dot)de>
To: pgsql-general(at)postgresql(dot)org, sunpeng <bluevaley(at)gmail(dot)com>
Cc: John R Pierce <pierce(at)hogranch(dot)com>
Subject: Re: are there any methods to disable updating index before inserting large number tuples?
Date: 2011-11-22 18:53:36
Message-ID: 201111221953.36954.andres@anarazel.de
Lists: pgsql-general

Hi,

On Tuesday 22 Nov 2011 19:01:02 John R Pierce wrote:
> On 11/22/11 7:52 AM, Andrew Sullivan wrote:
> > But I think performance on that table is going to be pretty bad. I
> > suspect that COPY is going to be your friend here.
>
> indeed. 20M rows/hour is 5500 rows/second. you'd better have a
> seriously fast disk system, say, 20 15k RPM SAS drives in a RAID10 with
> a decent SAS raid controller that has 1GB of writeback battery-or-flash
> backed cache.

20M rows inserted inside one transaction don't cause *that* many writes. I
suspect the bigger problem won't be the raw disk throughput for heap/WAL
writes, but the index size once the table grows. As soon as the indexes get
bigger than the available shared_buffers, performance will suffer quite a bit.
For that you probably need a sensible partitioning strategy... which is likely
to be important anyway so you can throw away old data efficiently.
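
A minimal sketch of what that could look like with inheritance-based
partitioning (the table "events", the column "ts" and the date ranges are
made-up names, just for illustration):

CREATE TABLE events (
    ts      timestamptz NOT NULL,
    payload text
);

-- one child table per day; the CHECK constraint lets constraint
-- exclusion skip irrelevant partitions at query time
CREATE TABLE events_2011_11_22 (
    CHECK (ts >= '2011-11-22' AND ts < '2011-11-23')
) INHERITS (events);

-- index each child separately, so no single index grows without bound
CREATE INDEX events_2011_11_22_ts_idx ON events_2011_11_22 (ts);

-- old data can then be discarded cheaply
DROP TABLE events_2011_10_01;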

Using COPY is advantageous compared to using INSERT because it can do some
operations in bulk which INSERT cannot.
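
For example, loading a client-side CSV file via psql (file name and column
list are just an example, assuming the hypothetical "events" table above):

\copy events (ts, payload) FROM '/tmp/events.csv' WITH CSV

If the file is readable by the server, plain COPY ... FROM 'filename' works
as well and avoids shipping the data through the client.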

How wide will those rows be, how long do you plan to store the data, how are
you querying it?
Andres
