Re: Postgres INSERT performance and scalability

From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: Igor Chudov <ichudov(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Postgres INSERT performance and scalability
Date: 2011-09-19 23:32:29
Message-ID: CAOR=d=3YBs==367X6HpHM82YybfpxRHTirvrmEejrtzEoD6thg@mail.gmail.com
Lists: pgsql-performance

On Mon, Sep 19, 2011 at 4:11 PM, Igor Chudov <ichudov(at)gmail(dot)com> wrote:
> Let's say that I want to do INSERT SELECT of 1,000 items into a table. The
> table has some ints, varchars, TEXT and BLOB fields.
> Would the time that it takes, differ a great deal, depending on whether the
> table has only 100,000 or 5,000,000 records?

Depends. Got any indexes? The more indexes you have to update, the
slower it's gonna be. You can test this; it's easy to create test
rows like so:

insert into test select generate_series(1,10000);

etc. There's lots of examples floating around on how to do that. So,
make yourself a couple of tables and test it.
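To flesh out that suggestion a bit: a minimal sketch of such a test, run in psql, might look like the following. The table and column names here are made up for illustration, and \timing is psql's built-in per-statement timer:

```sql
-- Hypothetical test tables: same columns, one with extra indexes.
CREATE TABLE test_noidx (id int, val text);
CREATE TABLE test_idx   (id int, val text);
CREATE INDEX ON test_idx (id);
CREATE INDEX ON test_idx (val);

-- Pre-populate to whatever size you want to test (e.g. 5,000,000 rows),
-- then compare the time to insert another 1,000 rows into each.
\timing on
INSERT INTO test_noidx SELECT g, 'row ' || g FROM generate_series(1, 1000) AS g;
INSERT INTO test_idx   SELECT g, 'row ' || g FROM generate_series(1, 1000) AS g;
```

Repeat the pre-population step at 100,000 and 5,000,000 rows and compare the reported times to see how much table size and index maintenance matter for your workload.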
