From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Tomas Vondra <tv(at)fuzzy(dot)cz>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: strange OOM errors with EXECUTE in PL/pgSQL
Date: 2012-12-20 01:29:16
Message-ID: 27084.1355966956@sss.pgh.pa.us
Lists: pgsql-hackers
Tomas Vondra <tv(at)fuzzy(dot)cz> writes:
> What it does:
> 1) creates a simple table called "test" with one text column.
> 2) creates a plpgsql function with one parameter; all that function
> does is pass the parameter to EXECUTE
> 3) calls the function with a string containing many INSERTs into the
> test table
> The way the EXECUTE is used is a bit awkward, but the failures seem a
> bit strange to me. The whole script is ~500kB, and most of it is about
> 11000 very simple INSERT statements:
> insert into test(value) values (''aaaaaaaaaa'');
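
[For concreteness, a minimal sketch of the setup being described; the
function name exec_string is assumed, not taken from the original script:

    create table test (value text);

    create or replace function exec_string(src text) returns void as $$
    begin
        execute src;  -- SPI parses and plans every statement in the string
    end;
    $$ language plpgsql;

    -- the actual call passes one ~500kB literal holding ~11000 such
    -- statements; abbreviated to two here:
    select exec_string('insert into test(value) values (''aaaaaaaaaa'');
                        insert into test(value) values (''aaaaaaaaaa'');');
]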
The reason this fails is that you've got a half-megabyte source string,
and each of the 11000 plans that are due to be created from it saves
its own copy of the source string. Hence, 11000 x 0.5 MB = some 5500
megabytes needed just for source strings.
We could possibly fix this by inventing some sort of reference-sharing
arrangement (which'd be complicated and fragile) or by not storing the
source strings with the plans (which'd deal a serious blow to our
ability to provide helpful error messages). Neither answer seems
appealing.
I think it would be a better idea to adopt a less brain-dead way of
processing the data. Can't you convert this to a single INSERT with a
lot of VALUES rows? Or split it into multiple EXECUTE chunks?
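
For illustration, minimal sketches of both forms, reusing the test table
from above; the row values are placeholders, and the statement array in
(b) is a made-up stand-in for however the caller splits its script:

    -- (a) one INSERT carrying many VALUES rows: a single plan and a
    --     single source string, regardless of the number of rows
    insert into test(value) values
        ('aaaaaaaaaa'),
        ('aaaaaaaaaa'),
        ('aaaaaaaaaa');

    -- (b) run the statements piecewise, so each EXECUTE's plans and its
    --     copy of the source text are freed before the next one runs
    do $$
    declare
        s text;
        stmts text[] := array[
            'insert into test(value) values (''aaaaaaaaaa'')',
            'insert into test(value) values (''aaaaaaaaaa'')'];
    begin
        foreach s in array stmts loop
            execute s;  -- each chunk's plan is small and short-lived
        end loop;
    end;
    $$;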
regards, tom lane