From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Jaime Casanova <jaime(at)2ndquadrant(dot)com>
Cc: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Bruce Momjian <bruce(at)momjian(dot)us>, Jeff Davis <pgsql(at)j-davis(dot)com>, Andrew Dunstan <andrew(at)dunslane(dot)net>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Reduce WAL logging of INSERT SELECT
Date: 2011-08-06 16:23:38
Message-ID: 14069.1312647818@sss.pgh.pa.us
Lists: pgsql-hackers

Jaime Casanova <jaime(at)2ndquadrant(dot)com> writes:
> On Sat, Aug 6, 2011 at 11:05 AM, Heikki Linnakangas
> <heikki(dot)linnakangas(at)enterprisedb(dot)com> wrote:
>> It can be very helpful when loading a lot of data, so I'm not in favor of
>> removing it altogether. Maybe WAL-log the first 10000 rows or such normally,
>> and skip WAL after that. Of course, loading 10001 rows becomes the worst
>> case then, but something along those lines...
> why 10000 rows?

Yeah; any particular number is wrong. Perhaps it'd be better to put the
behavior under user control. In the case of COPY, where we already have
a place to stick random options, you could imagine writing something
like
COPY ... WITH (bulk)
to cue the system that a lot of data is coming in. But I don't see any
nice way to do something similar for INSERT/SELECT. I hesitate to
suggest a GUC, but something like "SET bulk_load = on" would be pretty
straightforward to use in pg_dump for instance.
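For concreteness, the two proposed knobs might be used like this. Note that neither the `bulk` COPY option nor the `bulk_load` GUC exists; both are hypothetical syntax sketching the proposal above:

```sql
-- Hypothetical COPY option: cue the system that a lot of data is coming
COPY mytable FROM '/path/to/data.csv' WITH (bulk);

-- Hypothetical GUC form, e.g. as pg_dump could emit around a restore:
SET bulk_load = on;
INSERT INTO mytable SELECT * FROM staging_table;
SET bulk_load = off;
```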

			regards, tom lane