From: | Oswaldo <listas(at)soft-com(dot)es> |
---|---|
To: | psycopg(at)postgresql(dot)org |
Subject: | Re: speed concerns with executemany() |
Date: | 2017-01-02 19:33:35 |
Message-ID: | cdbf56fa-1f82-c60f-f3f3-7f6c636bddf5@soft-com.es |
Lists: | psycopg |
On 02/01/17 at 17:07, Daniele Varrazzo wrote:
> On Mon, Jan 2, 2017 at 4:35 PM, Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com> wrote:
>>
>> With NRECS=10000 and page size=100:
>>
>> aklaver(at)tito:~> python psycopg_executemany.py -p 100
>> classic: 427.618795156 sec
>> joined: 7.55754685402 sec
>
Hello,
There is a third option that provides a further improvement: generating a
single SQL statement with multiple VALUES tuples.
- Test with local database:
classic: 1.53970813751 sec
joined: 0.564052820206 sec
joined values: 0.175103187561 sec
- Test with db on an internet server
classic: 236.342775822 sec
joined: 6.08789801598 sec
joined values: 4.49090409279 sec
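The actual executemany3 function is in the attachment to the original mail and is not reproduced here; the following is only a minimal sketch of the "joined values" idea it describes: build one INSERT covering all rows, keeping %s placeholders so the driver still handles quoting. The table and column names are illustrative.

```python
def build_multivalues_insert(table, columns, rows):
    """Return (sql, params) for a single multi-row INSERT.

    Instead of executing one statement per row (the "classic"
    executemany behaviour), emit one statement with a VALUES list
    covering every row, and a flat parameter list to match.
    """
    row_tpl = "(" + ", ".join(["%s"] * len(columns)) + ")"
    sql = "INSERT INTO {} ({}) VALUES {}".format(
        table, ", ".join(columns), ", ".join([row_tpl] * len(rows)))
    params = [value for row in rows for value in row]
    return sql, params

sql, params = build_multivalues_insert(
    "test", ["num", "data"], [(1, "a"), (2, "b"), (3, "c")])
# sql    -> "INSERT INTO test (num, data) VALUES (%s, %s), (%s, %s), (%s, %s)"
# params -> [1, "a", 2, "b", 3, "c"]
```

The resulting (sql, params) pair would be passed to a single cursor.execute() call, batched by page size for very large inputs. Later psycopg2 releases (2.7+) ship this technique ready-made as psycopg2.extras.execute_values().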
I often need to move data between different internet servers (SQL Server
<-> PostgreSQL). In my experience this is the fastest way to move
hundreds of thousands of records.
I attach the sample, modified with the executemany3 function.
(Sorry for my bad English.)
Regards.