From: | Daniele Varrazzo <daniele(dot)varrazzo(at)gmail(dot)com> |
---|---|
To: | Dorian Hoxha <dorian(dot)hoxha(at)gmail(dot)com> |
Cc: | Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>, Christophe Pettus <xof(at)thebuild(dot)com>, mike bayer <mike_mp(at)zzzcomputing(dot)com>, "psycopg(at)postgresql(dot)org" <psycopg(at)postgresql(dot)org> |
Subject: | Re: speed concerns with executemany() |
Date: | 2016-12-30 22:24:30 |
Message-ID: | CA+mi_8bOtDmg=1093SMnjsSjb62Ez5Riap0xbV30BuEXQ=R4yA@mail.gmail.com |
Lists: | psycopg |
The implementation of executemany I described a few days ago is
available in this gist; it is not heavily tested:
https://gist.github.com/dvarrazzo/4204cca5d1cd7c9c95bc814d81e6b83e
I would like to know if anyone sees any shortcoming in this new implementation.
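The core idea of the gist is to cut network round trips by joining many statements with `;` and sending them in pages, rather than one round trip per parameter tuple. A minimal sketch of that paging idea follows; the helper names (`paginate`, `join_statements`) are illustrative, not the gist's actual API, and in real use the `mogrify` callable should be psycopg2's `cursor.mogrify`, which interpolates parameters safely:

```python
def paginate(argslist, page_size=100):
    """Yield successive pages of at most page_size parameter tuples."""
    page = []
    for args in argslist:
        page.append(args)
        if len(page) >= page_size:
            yield page
            page = []
    if page:
        yield page

def join_statements(sql, page, mogrify):
    """Join one page of statements into a single ';'-separated command.

    `mogrify` must interpolate parameters safely (e.g. cursor.mogrify
    in psycopg2); a plain % substitution is NOT injection-safe and is
    used below only for testing the string assembly.
    """
    return b";".join(mogrify(sql, args) for args in page)
```

Each joined page can then be sent with a single `cursor.execute()` call, so a list of 10,000 tuples costs 100 round trips instead of 10,000.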
-- Daniele
On Sun, Dec 25, 2016 at 10:11 AM, Dorian Hoxha <dorian(dot)hoxha(at)gmail(dot)com> wrote:
> Sending stuff in big batches + autocommit (fast transactions) + few network
> calls is performance 101, I thought. I think it should be documented what
> "executemany" actually does (it looked suspicious when I saw it a long time
> ago, which is why I didn't use it).
>
> On Sat, Dec 24, 2016 at 6:00 AM, Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>
> wrote:
>>
>> On 12/23/2016 06:57 PM, Christophe Pettus wrote:
>>>
>>>
>>>> On Dec 23, 2016, at 18:55, Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>
>>>> wrote:
>>>> Alright that I get. Still the practical outcome is each INSERT is being
>>>> done in a transaction (an implicit one) so the transaction overhead comes
>>>> into play. Or am I missing something?
>>>
>>>
>>> Nope, not missing a thing. The theory (and it is only that) is that when
>>> they do the .executemany(), each of those INSERTs pays the transaction
>>> overhead, while if they do one big INSERT, just that one statement does.
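The "one big INSERT" alternative means building a single multi-row VALUES statement and executing it once, so the per-statement overhead is paid once per batch. A sketch of the placeholder assembly (helper names hypothetical; psycopg2's `extras.execute_values` later packaged this same idea):

```python
def multirow_insert_sql(table, ncols, nrows):
    """Build 'INSERT INTO t VALUES (%s, ...), (%s, ...)' placeholder SQL
    for nrows rows of ncols columns each."""
    row = "(" + ", ".join(["%s"] * ncols) + ")"
    return "INSERT INTO {} VALUES {}".format(table, ", ".join([row] * nrows))

def flatten(rows):
    """Flatten a list of row tuples into the single parameter list
    that cursor.execute() expects for the SQL above."""
    return [v for row in rows for v in row]
```

With a list `l` of 200 3-tuples this becomes one call, `cur.execute(multirow_insert_sql("psycopg_table", 3, len(l)), flatten(l))`, instead of 200 separate statements.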
>>
>>
>> Just ran a quick and dirty test using IPython %timeit.
>>
>> Using a list of 200 tuples, each holding 3 integers, INSERTing into:
>> test=> \d psycopg_table
>> Table "public.psycopg_table"
>> Column | Type | Modifiers
>> --------+---------+-----------
>> a | integer |
>> b | integer |
>> c | integer |
>>
>>
>> The results were:
>>
>> sql = "INSERT INTO psycopg_table VALUES(%s, %s, %s)"
>>
>> Without autocommit:
>>
>> In [65]: timeit -n 10 cur.executemany(sql, l)
>> 10 loops, best of 3: 12.5 ms per loop
>>
>>
>> With autocommit:
>>
>> In [72]: timeit -n 10 cur.executemany(sql, l)
>> 10 loops, best of 3: 1.71 s per loop
>>
>>
>>>
>>> --
>>> -- Christophe Pettus
>>> xof(at)thebuild(dot)com
>>>
>>
>>
>> --
>> Adrian Klaver
>> adrian(dot)klaver(at)aklaver(dot)com
>>
>>
>>
>> --
>> Sent via psycopg mailing list (psycopg(at)postgresql(dot)org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/psycopg
>
>