From: Merlin Moncure <mmoncure(at)gmail(dot)com>
To: Yan Cheng CHEOK <yccheok(at)yahoo(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Fast Insert and Update (through COPY)
Date: 2010-12-15 17:25:20
Message-ID: AANLkTinjaO1bACVJVpA4HLosaWn0aBrhiF_jarmv--py@mail.gmail.com
Lists: pgsql-general
On Wed, Dec 15, 2010 at 10:17 AM, Yan Cheng CHEOK <yccheok(at)yahoo(dot)com> wrote:
> Previously, I was inserting bulk data in the following way:
>
> for each item
>     update item in table
>     if row count is 0
>         insert item into table
>
> I realized I was suffering from a performance problem.
>
> Later, I realized it is much faster to use
>
> COPY
>
> I am very happy with the speed.
>
> Later, I realized that COPY doesn't work well if I already have a row with the same unique key. What I did is:
>
> # Try to remove old rows first
> delete rows where <condition>
> # And perform really fast insertion
> COPY
>
> I was wondering, is this a common technique used for fast bulk data insertion? Are there other techniques?
Copy your data to a scratch table:
create temp table scratch (like foo);
copy scratch...
then do the update:
delete from foo where exists (select * from scratch where something);
then
insert into foo select * from scratch;
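For reference, a minimal end-to-end sketch of this scratch-table pattern, assuming foo is keyed on a single column named id and the incoming data is in a server-readable CSV file (the column name and file path are placeholders, not from the original post):

-- stage the incoming rows in a session-local temp table
create temp table scratch (like foo);
copy scratch from '/tmp/items.csv' csv;  -- hypothetical path; \copy works from a psql client

begin;
-- remove the existing rows that are about to be replaced, matching on the key
delete from foo
 where exists (select 1 from scratch where scratch.id = foo.id);
-- bulk-insert everything from the staging table
insert into foo select * from scratch;
commit;

Wrapping the delete and insert in a single transaction keeps other sessions from ever seeing foo with the old rows removed but the new ones not yet inserted.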
merlin