From: Igor Chudov <ichudov(at)gmail(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Postgres INSERT performance and scalability
Date: 2011-09-20 01:11:44
Message-ID: CAMhtkAby8zyEr-CdHqp1YtanbAUYFMumrH_ooG_=H4GkQqD7bg@mail.gmail.com
Lists: pgsql-performance
On Mon, Sep 19, 2011 at 6:32 PM, Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com> wrote:
> On Mon, Sep 19, 2011 at 4:11 PM, Igor Chudov <ichudov(at)gmail(dot)com> wrote:
> > Let's say that I want to do INSERT SELECT of 1,000 items into a table. The
> > table has some ints, varchars, TEXT and BLOB fields.
> > Would the time that it takes differ a great deal, depending on whether the
> > table has only 100,000 or 5,000,000 records?
>
> Depends. Got any indexes? The more indexes you have to update the
> slower it's gonna be. You can test this, it's easy to create test
> rows like so:
>
> insert into test select generate_series(1,10000);
>
> etc. There's lots of examples floating around on how to do that. So,
> make yourself a couple of tables and test it.
>
Well, my question is rather whether a bulk INSERT of N records into a large
table would take substantially longer than a bulk insert of N records into a
small table. In other words, does the populating time grow as the table gets
more and more rows?
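
For reference, Scott's generate_series suggestion could be expanded into a
rough timing test along these lines (the table and column names below are
just placeholders, not anything from this thread, and \timing is a psql
command):

-- Two tables with the same schema and an index, one small, one large.
CREATE TABLE small_tbl (id int PRIMARY KEY, payload text);
CREATE TABLE big_tbl   (id int PRIMARY KEY, payload text);

-- Populate them with different row counts.
INSERT INTO small_tbl SELECT g, md5(g::text) FROM generate_series(1, 100000) g;
INSERT INTO big_tbl   SELECT g, md5(g::text) FROM generate_series(1, 5000000) g;

-- In psql, turn on timing and compare a 1,000-row bulk insert into each.
\timing on
INSERT INTO small_tbl SELECT g, md5(g::text) FROM generate_series(100001, 101000) g;
INSERT INTO big_tbl   SELECT g, md5(g::text) FROM generate_series(5000001, 5001000) g;

In general the per-row cost of maintaining a btree index grows only roughly
with the logarithm of the table size, so for 1,000 rows the difference
between the two inserts should be modest, but the test above would show it
directly on the hardware in question.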