From: Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>
To: Leandro Guimarães <leo(dot)guimaraes(at)gmail(dot)com>, pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: pg_bulkload sequential
Date: 2020-11-07 17:53:43
Message-ID: a2fbc0f6-88ac-f855-d7bd-167f4ec62ae4@aklaver.com
Lists: pgsql-general
On 11/7/20 9:28 AM, Leandro Guimarães wrote:
> Hello,
> I have a process using pg_bulkload, and sometimes there are duplicate
> keys in the CSV file that pg_bulkload reads.
>
> My question is: does pg_bulkload insert rows in sequential order?
>
> Example, if i have the following csv file:
>
> key_1;0.00
> key_1;100.00
>
> And I use ON_DUPLICATE_KEEP = NEW in the .ctl file, is it guaranteed
> that the value 0.00 will be overwritten with 100.00? Or can pg_bulkload
> not guarantee this order?
Assuming they are in that order in the file and you are using DIRECT
mode, I would say that would be the case. In PARALLEL mode, who knows?
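For reference, a control file for this kind of load might look roughly
like the sketch below. This is a hypothetical example: the table name and
file path are placeholders, and the option names (OUTPUT, INPUT, TYPE,
DELIMITER, WRITER, ON_DUPLICATE_KEEP, DUPLICATE_ERRORS) should be checked
against the documentation for your installed pg_bulkload version, since
they have changed across releases.

```
# sample.ctl -- hypothetical pg_bulkload control file (verify option
# names against your pg_bulkload version's docs)
OUTPUT = public.target_table      # placeholder destination table
INPUT = /path/to/data.csv         # placeholder input file
TYPE = CSV
DELIMITER = ;
WRITER = DIRECT                   # DIRECT mode, as discussed above
ON_DUPLICATE_KEEP = NEW           # keep the later row on key collision
DUPLICATE_ERRORS = -1             # tolerate any number of duplicates
```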
In any case, I would be dubious of any process that overwrites rows and
depends strictly on file ordering to do the right thing. You are putting
a lot of confidence in the data in the CSV file being correctly ordered.
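One way to avoid depending on pg_bulkload's ordering at all is to
deduplicate the CSV before loading, keeping only the last row per key.
A minimal sketch in Python, using a sample that mirrors the
semicolon-delimited file from the question (the data and file layout are
assumptions, not taken from the poster's actual file):

```python
import csv
import io

# Hypothetical sample input mirroring the question's format: key;value
raw = "key_1;0.00\nkey_1;100.00\nkey_2;5.00\n"

# Keep only the LAST value seen for each key. Re-assigning an existing
# dict key updates its value while preserving insertion order (Python 3.7+).
last = {}
for key, value in csv.reader(io.StringIO(raw), delimiter=";"):
    last[key] = value  # later rows overwrite earlier ones

# Write the deduplicated rows back out in the same semicolon format.
out = io.StringIO()
writer = csv.writer(out, delimiter=";", lineterminator="\n")
for key, value in last.items():
    writer.writerow([key, value])

print(out.getvalue())
```

With duplicates resolved up front, the load no longer cares whether
pg_bulkload processes the file strictly in order.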
>
> Thanks!
> Leandro Guimarães
>
--
Adrian Klaver
adrian(dot)klaver(at)aklaver(dot)com