Re: How batch processing works

From: "Peter J(dot) Holzer" <hjp-pgsql(at)hjp(dot)at>
To: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: How batch processing works
Date: 2024-09-22 19:36:12
Message-ID: 20240922193612.6q4f6w2gzf7ruu3l@hjp.at

On 2024-09-21 12:15:44 -0700, Adrian Klaver wrote:
> FYI, this is less of a problem with psycopg(3) and pipeline mode:
>
[...]
> with db.pipeline():
>     for i in range(1, num_inserts+1):
>         csr.execute("insert into parent_table values(%s, %s)", (i, 'a'))
>         if i % batch_size == 0:
>             db.commit()
> db.commit()
[...]
>
> For a remote connection to a database in another state, that took the
> time from:
>
> Method 2: Individual Inserts with Commit after 50 Rows: 2.42e+02 seconds
>
> to:
>
> Method 2: Individual Inserts(psycopg3 pipeline mode) with Commit after 50
> Rows: 9.83 seconds

Very cool. I'll keep that in mind.
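
For anyone who wants to try this locally, here's a minimal,
self-contained sketch of the pattern Adrian quoted (the DSN, the
parent_table schema, and the row/batch counts are placeholders I made
up, not his actual test setup):

    # Sketch of psycopg 3 pipeline mode for batched inserts.
    # Assumes a table like: create table parent_table (id int, val text)
    import psycopg

    num_inserts = 10_000
    batch_size = 50

    with psycopg.connect("dbname=test") as db:   # placeholder DSN
        with db.cursor() as csr:
            # pipeline() queues statements client-side and ships them
            # in batches instead of waiting for a reply per statement
            with db.pipeline():
                for i in range(1, num_inserts + 1):
                    csr.execute(
                        "insert into parent_table values (%s, %s)",
                        (i, 'a'),
                    )
                    if i % batch_size == 0:
                        db.commit()
            # commit whatever is left after the last full batch
            db.commit()

The speedup over a high-latency link comes from exactly that queueing:
you pay the network round trip roughly once per batch rather than once
per insert, which is consistent with the 2.42e+02 s -> 9.83 s numbers
above.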

I've been using psycopg 3 for newer projects, but for throwaway code
I've been sticking with psycopg2, simply because it's available from
the repos of all my usual distributions. Psycopg 3 is now in both
Debian and Ubuntu, though, so that will change.

hp

--
   _  | Peter J. Holzer      | Story must make more sense than reality.
|_|_) |                      |
| |   | hjp(at)hjp(dot)at    |    -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/   |       challenge!"
