Re: Insert into on conflict, data size up to 3 billion records

From: Rob Sargent <robjsargent(at)gmail(dot)com>
To: Karthik K <kar6308(at)gmail(dot)com>
Cc: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: Insert into on conflict, data size up to 3 billion records
Date: 2021-02-15 19:47:11
Message-ID: 6075918d-c07d-7a29-aecc-95e0b160033a@gmail.com
Lists: pgsql-general

On 2/15/21 12:22 PM, Karthik K wrote:
> Yes, I'm using \copy to load the batch table.
>
> With the new design, we expect fewer updates and more inserts going
> forward. One of the target columns I'm updating is indexed, so I will
> drop the index and try it out. Also, per your suggestion above,
> splitting the ON CONFLICT into separate INSERT and UPDATE statements is
> performant, but in order to split the records into batches (low, high)
> I first need to do a count of the primary key on the batch tables.
>
>
I don't think you need a count per se. If you know the approximate
range of the keys in the incoming/batch data (or better, their exact
min and max), you can derive the batch boundaries directly from that.
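
A rough sketch of what I mean (table and column names here are made up;
this assumes a single integer primary key "id"):

    -- find the key range of the staged batch instead of counting rows
    SELECT min(id) AS lo, max(id) AS hi FROM batch_table;

    -- then, for each slice of that range, update the rows that already
    -- exist in the target...
    UPDATE target t
    SET    val = b.val
    FROM   batch_table b
    WHERE  t.id = b.id
      AND  b.id >= 1000000 AND b.id < 2000000;

    -- ...and insert the ones that don't
    INSERT INTO target (id, val)
    SELECT b.id, b.val
    FROM   batch_table b
    WHERE  b.id >= 1000000 AND b.id < 2000000
      AND  NOT EXISTS (SELECT 1 FROM target t WHERE t.id = b.id);

Each slice can be committed on its own, so the transactions stay small,
and defining the slices only needs the min/max, not an exact row count.
(The NOT EXISTS form assumes nothing else is writing to the target
table while the batch runs.)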
