From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Ted Toth <txtoth(at)gmail(dot)com>
Cc: Brian Crowell <brian(at)fluggo(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: large numbers of inserts out of memory strategy
Date: 2017-11-29 14:55:06
Message-ID: 29817.1511967306@sss.pgh.pa.us
Lists: pgsql-general
Ted Toth <txtoth(at)gmail(dot)com> writes:
> On Tue, Nov 28, 2017 at 9:59 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> So whatever's going on here, there's more to it than a giant client-issued
>> INSERT (or COPY), or for that matter a large number of small ones. What
>> would seem to be required is a many-megabyte-sized plpgsql function body
>> or DO block.
> Yes I did generate 1 large DO block:
Apparently by "large" you mean "hundreds of megabytes". Don't do that,
at least not on a machine that hasn't got hundreds of megabytes to spare.
The entire thing has to be sucked into memory and parsed before anything
will happen.
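
(A rough sketch of the alternative, with a made-up table name, would be to
stream the rows with COPY instead of wrapping them in a DO block; the
server then processes the data row by row rather than parsing one giant
statement:

    -- hypothetical table; run this through psql so the inline data and
    -- the terminating \. are handled by the client
    COPY big_table (id, payload) FROM STDIN WITH (FORMAT csv);
    1,first row
    2,second row
    \.

Or emit the inserts as many ordinary statements in a plain SQL script, so
that no single statement ever needs hundreds of megabytes to parse.)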
regards, tom lane