From: "Matthew Draper" <matthew(at)trebex(dot)net>
To: pgsql-bugs(at)postgresql(dot)org
Subject: BUG #2528: Memory leak on PL/pgSQL compilation error
Date: 2006-07-13 07:34:15
Message-ID: 200607130734.k6D7YFX3051921@wwwmaster.postgresql.org
Lists: pgsql-bugs
The following bug has been logged online:
Bug reference: 2528
Logged by: Matthew Draper
Email address: matthew(at)trebex(dot)net
PostgreSQL version: 8.1.3, 8.1.0
Operating system: Debian Stable, Windows 2000
Description: Memory leak on PL/pgSQL compilation error
Details:
Hi!
There's a fairly obvious work-around for this (don't repeatedly call
syntactically invalid functions), but I thought I'd better report it
anyway.
Set up:
CREATE DATABASE leak_test;
\c leak_test
CREATE LANGUAGE PLPGSQL;
SET check_function_bodies TO off;
CREATE FUNCTION runtime_error() RETURNS void AS $BODY$
DECLARE
    foo varchar;
BEGIN
    foo := 'something';
    FOR bar IN SELECT 17 LOOP
        foo := bar.fred;
    END LOOP;
    RETURN;
END
$BODY$ LANGUAGE PLPGSQL VOLATILE;
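To be clear (this is my reading of why it fails, not part of the setup itself): the loop variable bar is never declared, and because check_function_bodies is off, the mistake only surfaces when PL/pgSQL compiles the body on the first call. So every call fails, along the lines of:

SELECT runtime_error();
-- each call raises a PL/pgSQL compilation error about the undeclared
-- loop variable "bar" (I'm not quoting the exact message from memory)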
Now, use your method of choice to call that function ("SELECT
runtime_error();") many times, all in one connection -- I saw a very
measurable effect after a few thousand calls, and several hundred MB after
200,000. I just created a file containing that line repeated 100,000 times
and used "\i" to pull it in. Memory usage of that connection's backend seems
to grow linearly. Closing the connection (by exiting psql) frees the memory.
As we're not in a transaction, I would expect memory usage on the server to
be relatively constant.
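In case it helps, this is roughly how I drive it from a shell (the file
name is arbitrary; any equivalent loop should do):

# generate 100,000 identical calls, then feed them over a single connection
yes 'SELECT runtime_error();' | head -n 100000 > calls.sql
psql leak_test
leak_test=# \i calls.sql

While that runs, the backend's resident size can be watched from another
terminal with top or ps.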
This came to my attention because we had a script that maintained a
long-running connection to a Windows 2000 server running PostgreSQL 8.1.0
and called a PL/pgSQL function every few seconds; apparently, the function
was changed and an error introduced. Some time later, the database began
having OOM errors. I can provide the full memory context dump that PG was
logging in response to those errors, but I suspect the point of interest is
the following line, which was repeated consecutively 35952 times for each
OOM error:
PL/PgSQL function context: 57344 total in 3 blocks; 30696 free (4 chunks);
26648 used
I have the logs for a period of just over an hour, during which the system
reported 139 OOM errors. The stitched log file totals 417 MB, though it
obviously compresses very well. Before I killed it, the connection's backend
memory usage was sitting in the 800-900 MB range, which is suspiciously
similar to 35952 * 26648 bytes (roughly 914 MiB).
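If it's useful to anyone checking a similar log, the consecutive repetition
counts can be tallied with something like this (the stitched.log name is
just from my setup):

awk '/PL\/PgSQL function context/ { n++; next } n { print n; n = 0 } END { if (n) print n }' stitched.log | sort -n | uniq -c

Each output line is a count of dumps followed by a run length; here every
one of the 139 dumps showed the same 35952-line run.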
As indicated in the version and OS fields, I've used the above to reproduce
this on a built-from-source PostgreSQL 8.1.3 running on Debian Stable. I can
dig out hardware specs for both machines if needed... I believe the Windows
box is a quad Xeon with 4 GB RAM, and I know the Debian system is a Celeron
with 512 MB.
An upgrade to 8.1.4 has been on my TODO list for a while, but I'm not likely
to have an opportunity in the immediate future.
Thanks!
Matthew