From: Florian Weimer <fweimer(at)redhat(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Pipelining INSERTs using libpq
Date: 2012-12-21 10:31:25
Message-ID: 50D43A7D.1030406@redhat.com
Lists: pgsql-general

I would like to pipeline INSERT statements. The idea is to avoid
waiting for server round trips if the INSERT has no RETURNING clause and
runs in a transaction. In my case, the failure of an individual INSERT
is not particularly interesting (it's a "can't happen" scenario, more or
less). I believe this is how the X toolkit avoided network latency issues.
I wonder what's the best way to pipeline requests to the server using
the libpq API. Historically, I have used COPY FROM STDIN instead, but
that requires (double) encoding and some client-side buffering, plus
heuristics when multiple tables are being filled.
It does not seem possible to use the asynchronous APIs for this purpose,
or am I missing something?
--
Florian Weimer / Red Hat Product Security Team