Re: How to insert a bulk of data with unique-violations very fast

From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: Torsten Zühlsdorff <foo(at)meisterderspiele(dot)de>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: How to insert a bulk of data with unique-violations very fast
Date: 2010-06-02 19:59:23
Message-ID: AANLkTimwLi6PkFYltlyXecsDQaI6yD3y22WLmWdgwe_R@mail.gmail.com
Lists: pgsql-performance

On Tue, Jun 1, 2010 at 9:03 AM, Torsten Zühlsdorff
<foo(at)meisterderspiele(dot)de> wrote:
> Hello,
>
> I have a set of unique data with about 150,000,000 rows. Regularly I get a
> list of new data that contains many times more rows than the already stored
> set, often around 2,000,000,000 rows. Within these rows are many duplicates,
> and often the already-stored data as well.
> I want to store only the entries that are not already stored, and I do not
> want to store duplicates. Example:

The standard method in pgsql is to load the new data into a temp table,
then insert into the old table only the rows that do not already exist
there, along the lines of the sketch below.
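A minimal sketch of that approach; the table name main_table, the column
data, and the file path are made up for illustration:

-- Stage the incoming rows. A temp table skips WAL and is dropped
-- automatically at the end of the session.
CREATE TEMP TABLE staging (data text);

-- Bulk-load the new list; COPY is far faster than row-by-row INSERTs.
-- (Use \copy from psql if the file lives on the client.)
COPY staging FROM '/path/to/new_data.txt';

-- Insert only rows not already stored. DISTINCT collapses the
-- duplicates within the new list itself.
INSERT INTO main_table (data)
SELECT DISTINCT s.data
FROM staging s
WHERE NOT EXISTS (SELECT 1 FROM main_table m WHERE m.data = s.data);

With a unique index on main_table(data) the NOT EXISTS probe stays cheap,
and that same index is what enforces uniqueness on the final insert.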

