From: Ravi Krishna <srkrishna1(at)aol(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Christopher Browne <cbbrowne(at)gmail(dot)com>, laurenz(dot)albe(at)cybertec(dot)at, Rob Sargent <robjsargent(at)gmail(dot)com>, PostgreSQL Mailing Lists <pgsql-general(at)postgresql(dot)org>
Subject: Re: COPY threads
Date: 2018-10-10 21:19:50
Message-ID: 4382AB2D-744F-4A4C-B847-AC3556C2F80E@aol.com
Lists: pgsql-general
Thank you. Let me test it and see the benefit. We have a use case for this.
> On Oct 10, 2018, at 17:18 , Andres Freund <andres(at)anarazel(dot)de> wrote:
>
>
>
> On October 10, 2018 2:15:19 PM PDT, Ravi Krishna <srkrishna1(at)aol(dot)com> wrote:
>>>
>>> pg_restore doesn't take locks on the table for the COPY, it does so
>>> because creating the table takes an exclusive lock.
>>
>>
>> Interesting. I seem to recollect reading here that I can't have
>> concurrent COPY on the same table because of the lock.
>> To give an example:
>>
>> If I have a large file with say 400 million rows, can I first split it
>> into 10 files of 40 million rows each and then fire up 10 different
>> COPY sessions, each reading from a split file, but copying into the
>> same table. I thought not. It will be great if we can do this.
>
> Yes, you can.
>
> Andres
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.
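To make the scenario above concrete, here is a minimal sketch of splitting one large file and loading the pieces into the same table with concurrent COPY sessions. The file, database, and table names (big_table.csv, mydb, my_table) are placeholders, not names from the thread; `split -n l/10` is the GNU coreutils option that divides a file into 10 chunks without breaking lines.

```shell
# Split the large CSV into 10 line-wise chunks: chunk_00 .. chunk_09.
split -n l/10 -d big_table.csv chunk_

# Launch one \copy per chunk in the background, all targeting the same
# table. COPY takes only a row-level lock on an existing table, so the
# sessions do not block one another.
for f in chunk_*; do
    psql -d mydb -c "\copy my_table from '$f' with (format csv)" &
done
wait    # block until all 10 loads have finished
```

Whether 10 parallel sessions actually help depends on I/O and WAL throughput; fewer, larger sessions can be just as fast on some hardware, so it is worth benchmarking as Ravi suggests.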