Re: Removing duplicate records from a bulk upload (rationale behind selecting a method)

From: David G Johnston <david(dot)g(dot)johnston(at)gmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Removing duplicate records from a bulk upload (rationale behind selecting a method)
Date: 2014-12-12 18:40:27
Message-ID: 1418409627311-5830353.post@n5.nabble.com
Lists: pgsql-general

John McKown wrote
> I don't
> know, myself, why this would be faster. But I'm not any kind of a
> PostgreSQL expert either.

It is faster because PostgreSQL does not have native parallelism. By putting
"a % n" in a WHERE clause you can start n separate sessions, have each one
filter on a different remainder (0 through n-1), and manually introduce
parallelism into the activity.
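
For illustration, a minimal sketch of how the split might look, assuming a
staging table "stage_data" with a bigint key column "a" and a target table
"final_data" (all of these names are hypothetical), deduplicating across 4
sessions:

    -- Hypothetical sketch: dedupe a bulk-loaded staging table "stage_data"
    -- (bigint key column "a") into "final_data", splitting the work across
    -- 4 sessions by the remainder of a % 4.  Session 1 runs:

    INSERT INTO final_data
    SELECT DISTINCT ON (a) *
    FROM   stage_data
    WHERE  a % 4 = 0;

    -- Session 2 runs the same statement with "a % 4 = 1", session 3 with
    -- "a % 4 = 2", and session 4 with "a % 4 = 3".  Without an ORDER BY,
    -- which row from each set of duplicates survives is arbitrary.

Each session scans a disjoint slice of the staging table, so the statements
can run concurrently without touching one another's rows.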

Though given this is likely to be I/O-constrained, the possible gains do not
scale linearly with the number of sessions - which themselves effectively max
out at the number of cores available to the server.

David J.

