From: | "Tsunakawa, Takayuki" <tsunakawa(dot)takay(at)jp(dot)fujitsu(dot)com> |
---|---|
To: | Craig Ringer <craig(at)2ndquadrant(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com> |
Cc: | Manuel Kniep <m(dot)kniep(at)web(dot)de>, PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>, "fujita(dot)etsuro(at)lab(dot)ntt(dot)co(dot)jp" <fujita(dot)etsuro(at)lab(dot)ntt(dot)co(dot)jp> |
Subject: | Re: foreign table batch inserts |
Date: | 2016-05-19 06:08:29 |
Message-ID: | 0A3221C70F24FB45833433255569204D1F577525@G01JPEXMBYT05 |
Lists: | pgsql-hackers |
From: pgsql-hackers-owner(at)postgresql(dot)org [mailto:pgsql-hackers-owner(at)postgresql(dot)org] On Behalf Of Craig Ringer
On 19 May 2016 at 01:39, Michael Paquier <michael(dot)paquier(at)gmail(dot)com> wrote:
On Wed, May 18, 2016 at 12:27 PM, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
> On 18 May 2016 at 06:08, Michael Paquier <michael(dot)paquier(at)gmail(dot)com> wrote:
>> > Wouldn’t it make sense to do the insert batch wise e.g. 100 rows ?
>>
>> Using a single query string with multiple values, perhaps, but then
>> the query string length limit comes into consideration, particularly
>> for large text values... The query used for the insertion has been a
>> prepared statement since writable queries were introduced in 9.3,
>> which actually keeps the code quite simple.
>
> This should be done the way PgJDBC does batches. It'd require a libpq
> enhancement, but it's one we IMO need anyway: allow pipelined query
> execution from libpq.
That's also something that would be useful for the ODBC driver. Since
it is using libpq as a hard dependency and does not speak the protocol
directly, it is doing additional round trips to the server for this
exact reason when preparing a statement.
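To make the multi-VALUES idea quoted above concrete, here is a rough sketch against plain libpq, assuming a made-up table remote_tab(a int, b text): the client folds a batch of rows into one INSERT ... VALUES ($1,$2),($3,$4),... statement and sends it in a single round trip. It helps, but it hits exactly the limits mentioned above: the statement text grows with the batch, and the extended protocol's Bind message can carry at most 65535 parameters, so the batch size has to be capped.

```c
/*
 * Rough sketch: client-side batching by folding many rows into a single
 * INSERT ... VALUES (...),(...) statement.  Assumes a hypothetical table
 * remote_tab(a int, b text); error handling is minimal.
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include <libpq-fe.h>

#define NCOLS 2

/* Build "INSERT INTO remote_tab(a, b) VALUES ($1,$2),($3,$4),..." */
static char *
build_insert_query(int nrows)
{
    char   *buf = malloc(64 + (size_t) nrows * 24);
    int     off = sprintf(buf, "INSERT INTO remote_tab(a, b) VALUES ");

    for (int r = 0; r < nrows; r++)
        off += sprintf(buf + off, "%s($%d,$%d)",
                       r > 0 ? "," : "", r * NCOLS + 1, r * NCOLS + 2);
    return buf;
}

/* Insert nrows rows (a_vals[i], b_vals[i]) in one round trip. */
static bool
insert_batch(PGconn *conn, const char **a_vals, const char **b_vals, int nrows)
{
    char        *query = build_insert_query(nrows);
    const char **params = malloc(sizeof(char *) * nrows * NCOLS);
    PGresult    *res;
    bool         ok;

    for (int r = 0; r < nrows; r++)
    {
        params[r * NCOLS] = a_vals[r];          /* column a, as text */
        params[r * NCOLS + 1] = b_vals[r];      /* column b, as text */
    }

    /* One Parse/Bind/Execute exchange; all values sent in text format. */
    res = PQexecParams(conn, query, nrows * NCOLS,
                       NULL, params, NULL, NULL, 0);
    ok = (PQresultStatus(res) == PGRES_COMMAND_OK);
    if (!ok)
        fprintf(stderr, "batch insert failed: %s", PQerrorMessage(conn));

    PQclear(res);
    free(params);
    free(query);
    return ok;
}
```

A real implementation would also need to fall back to smaller batches when rows contain very large text values, which is the query-length concern raised above.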
Yes, I want FE-BE protocol-level batch inserts/updates/deletes, too. I was just about to start thinking about how to implement it, prompted by a recent user question on pgsql-odbc. The user is migrating data to PostgreSQL with Microsoft SQL Server Integration Services (SSIS) and asked for a way to speed up multi-row inserts, because the ODBC driver's multi-row insert API currently takes as long as performing the single-row inserts separately. This may block the migration to PostgreSQL.
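For reference, this is roughly what such a multi-row insert looks like on the application side through ODBC's parameter-array API; the table target_tab(id, name) and the batch size are made up for illustration. One SQLExecute() submits all the parameter sets at once, but per the report above the driver apparently still ends up paying roughly one round trip per row underneath, which is what protocol-level batching would eliminate without changing the application.

```c
/*
 * Rough sketch of an ODBC parameter-array ("multi-row") insert, as an
 * application like SSIS would issue it.  The table target_tab(id, name)
 * and batch size are made up; error checking is omitted for brevity.
 */
#include <stdio.h>
#include <sql.h>
#include <sqlext.h>

#define ROWS     100
#define NAME_LEN 40

int
bulk_insert(SQLHSTMT hstmt)
{
    SQLINTEGER  ids[ROWS];
    SQLCHAR     names[ROWS][NAME_LEN];
    SQLLEN      name_ind[ROWS];
    SQLULEN     nprocessed = 0;
    SQLRETURN   rc;

    for (int i = 0; i < ROWS; i++)
    {
        ids[i] = i;
        snprintf((char *) names[i], NAME_LEN, "row-%d", i);
        name_ind[i] = SQL_NTS;
    }

    /* Bind all ROWS parameter sets column-wise, then execute once. */
    SQLSetStmtAttr(hstmt, SQL_ATTR_PARAM_BIND_TYPE,
                   (SQLPOINTER) SQL_PARAM_BIND_BY_COLUMN, 0);
    SQLSetStmtAttr(hstmt, SQL_ATTR_PARAMSET_SIZE,
                   (SQLPOINTER) (SQLULEN) ROWS, 0);
    SQLSetStmtAttr(hstmt, SQL_ATTR_PARAMS_PROCESSED_PTR, &nprocessed, 0);

    SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT, SQL_C_SLONG, SQL_INTEGER,
                     0, 0, ids, 0, NULL);
    SQLBindParameter(hstmt, 2, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_VARCHAR,
                     NAME_LEN, 0, names, NAME_LEN, name_ind);

    SQLPrepare(hstmt,
               (SQLCHAR *) "INSERT INTO target_tab(id, name) VALUES (?, ?)",
               SQL_NTS);
    rc = SQLExecute(hstmt);     /* one call, ROWS parameter sets */

    printf("parameter sets processed: %lu\n", (unsigned long) nprocessed);
    return SQL_SUCCEEDED(rc);
}
```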
And it's also useful for ECPG. Our customer wanted ECPG to support multi-row inserts when migrating to PostgreSQL, because their embedded-SQL applications use that feature with a commercial database.
If you take on this feature, I can help by reviewing and testing it, and by implementing the ODBC and ECPG sides, etc.
Regards
Takayuki Tsunakawa