Re: Pipelining INSERTs using libpq

From: Merlin Moncure <mmoncure(at)gmail(dot)com>
To: Florian Weimer <fweimer(at)redhat(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Pipelining INSERTs using libpq
Date: 2012-12-21 14:29:46
Message-ID: CAHyXU0w1+H-FZ6F-WC68EE8ra8C11vU94TmynWFUDs70VNaOBA@mail.gmail.com
Lists: pgsql-general

On Fri, Dec 21, 2012 at 4:31 AM, Florian Weimer <fweimer(at)redhat(dot)com> wrote:
> I would like to pipeline INSERT statements. The idea is to avoid waiting
> for server round trips if the INSERT has no RETURNING clause and runs in a
> transaction. In my case, the failure of an individual INSERT is not
> particularly interesting (it's a "can't happen" scenario, more or less). I
> believe this is how the X toolkit avoided network latency issues.
>
> I wonder what's the best way to pipeline requests to the server using the
> libpq API. Historically, I have used COPY FROM STDIN instead, but that
> requires (double) encoding and some client-side buffering plus heuristics if
> multiple tables are filled.
>
> It does not seem possible to use the asynchronous APIs for this purpose, or
> am I missing something?

How you attack this problem depends a lot on whether all the data you
want to insert is available at once, or whether you have to wait for it
from some actor on the client side. The purpose of the asynchronous
APIs is to let client-side work continue while the server is busy with
the query, so they only help if there is some other processing you need
to do to gather the data and/or prepare the queries. In that case you
could PQsend multiple INSERT statements in a single call.
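As a minimal sketch of the approach above: with the simple query
protocol, one PQsendQuery call may carry several semicolon-separated
statements, and the client drains one PGresult per statement later with
PQgetResult. (Note that PQexec, by contrast, discards all but the last
result.) The connection string and table "t" here are placeholders.

```c
/* Sketch: batching INSERTs into one round trip with libpq.
   Assumes a table t(a int) exists; conninfo is a placeholder. */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");  /* hypothetical conninfo */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* Dispatch several statements in a single network round trip.
       PQsendQuery returns immediately; the server runs them in order. */
    if (!PQsendQuery(conn,
            "BEGIN;"
            "INSERT INTO t(a) VALUES (1);"
            "INSERT INTO t(a) VALUES (2);"
            "INSERT INTO t(a) VALUES (3);"
            "COMMIT;")) {
        fprintf(stderr, "dispatch failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* The client is free to do other work here before draining results.
       PQgetResult yields one PGresult per statement, then NULL. */
    PGresult *res;
    while ((res = PQgetResult(conn)) != NULL) {
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "statement failed: %s",
                    PQresultErrorMessage(res));
        PQclear(res);
    }

    PQfinish(conn);
    return 0;
}
```

This only collapses statements known up front into one round trip; it is
not true pipelining of independently issued queries, which the libpq API
of this era does not expose.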

merlin
