From: | Florian Weimer <fweimer(at)redhat(dot)com> |
---|---|
To: | Merlin Moncure <mmoncure(at)gmail(dot)com> |
Cc: | pgsql-general(at)postgresql(dot)org |
Subject: | Re: Pipelining INSERTs using libpq |
Date: | 2012-12-23 11:56:42 |
Message-ID: | 50D6F17A.5030809@redhat.com |
Lists: | pgsql-general |
On 12/21/2012 03:29 PM, Merlin Moncure wrote:
> How you attack this problem depends a lot on if all your data you want
> to insert is available at once or you have to wait for it from some
> actor on the client side. The purpose of asynchronous API is to allow
> client side work to continue while the server is busy with the query.
The client has very little work to do until the next INSERT.
> So they would only help in your case if there was some kind of other
> processing you needed to do to gather the data and/or prepare the
> queries. Maybe then you'd PQsend multiple insert statements with a
> single call.
I want to use parameterized queries, so I'll have to build an INSERT
statement that inserts multiple rows per call. Given that the exchange is
still stop-and-wait (even with PQsendQueryParams), I can get through at
most one batch per RTT, so the number of rows per batch would have to be
rather large for a cross-continental bulk load. It's probably doable for
local bulk loading.
Does the wire protocol support pipelining? The server wouldn't have to
do much to implement it: instead of discarding unexpected bytes that
arrive after the current frame, it would just queue them for subsequent
processing.
(Sorry if this message arrives twice.)
--
Florian Weimer / Red Hat Product Security Team