| From: | Craig Ringer <craig(at)postnewspapers(dot)com(dot)au> |
|---|---|
| To: | Joachim Worringen <joachim(dot)worringen(at)iathh(dot)de> |
| Cc: | pgsql-general(at)postgresql(dot)org |
| Subject: | Re: INSERTing lots of data |
| Date: | 2010-05-28 12:55:39 |
| Message-ID: | 4BFFBD4B.3070708@postnewspapers.com.au |
| Lists: | pgsql-general |
On 28/05/10 17:41, Joachim Worringen wrote:
> Greetings,
>
> my Python application (http://perfbase.tigris.org) repeatedly needs to
> insert lots of data into an existing, non-empty, potentially large table.
> Currently, the bottleneck is with the Python application, so I intend to
> multi-thread it.
That may not be a great idea. To see why, look up Python's "Global
Interpreter Lock" (GIL): only one thread can execute Python bytecode at
a time, so CPU-bound Python code doesn't get faster with more threads.
Threading might still help if your application is mostly blocked on
network I/O, since the GIL is released while Python waits on the
network, but even then the results may not be great.
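For illustration, here's a rough one-connection-per-thread sketch. It's
not from your code: psycopg2 as the driver, the DSN, and the "results"
table with its columns are all assumptions. psycopg2's C extension
releases the GIL around blocking libpq calls, so threads can at least
overlap their network waits:

```python
# Hypothetical sketch: one psycopg2 connection per thread.
# The DSN and the "results" table/columns are made-up examples.
import threading

import psycopg2

DSN = "dbname=perfbase"  # assumed connection string


def insert_worker(rows):
    # Each thread gets its own connection; sharing a single
    # connection would just serialize the threads on it anyway.
    conn = psycopg2.connect(DSN)
    try:
        with conn.cursor() as cur:
            cur.executemany(
                "INSERT INTO results (run_id, value) VALUES (%s, %s)",
                rows,
            )
        conn.commit()
    finally:
        conn.close()


# Split the data into per-thread chunks (example data shown).
chunks = [[(1, 0.5), (1, 0.7)], [(2, 1.2), (2, 0.9)]]
threads = [threading.Thread(target=insert_worker, args=(chunk,))
           for chunk in chunks]
for t in threads:
    t.start()
for t in threads:
    t.join()
```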
> will I get a speedup? Or will table-locking serialize things on the
> server side?
Concurrent inserts work *great* with PostgreSQL; it's Python I'd be
worried about. Plain INSERTs take only lightweight locks under MVCC, so
they don't block one another on the server side.
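If threading does turn out to be GIL-bound, a multiprocessing variant
of the same sketch (same made-up table and DSN) sidesteps the GIL
entirely by giving each worker its own process and its own connection:

```python
# Hypothetical sketch: multiprocessing avoids the GIL altogether.
# One process and one connection per worker; table/columns are made up.
import multiprocessing

import psycopg2

DSN = "dbname=perfbase"  # assumed connection string


def insert_chunk(rows):
    conn = psycopg2.connect(DSN)
    try:
        with conn.cursor() as cur:
            cur.executemany(
                "INSERT INTO results (run_id, value) VALUES (%s, %s)",
                rows,
            )
        conn.commit()
    finally:
        conn.close()


if __name__ == "__main__":
    chunks = [[(1, 0.5)], [(2, 1.2)], [(3, 0.8)], [(4, 2.1)]]
    with multiprocessing.Pool(processes=4) as pool:
        pool.map(insert_chunk, chunks)
```

(If raw load speed is the real goal, COPY, e.g. psycopg2's
cursor.copy_from(), is usually much faster than row-at-a-time INSERTs,
but that's a separate tuning question.)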
--
Craig Ringer