From: Daniel Popowich <danielpopowich(at)gmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: plpythonu memory leak
Date: 2011-01-15 02:14:02
Message-ID: 19761.746.876024.922654@io.astro.umass.edu
Lists: pgsql-general
I am working with very large sets of time-series data. Imagine a
table with a timestamp as the primary key. One question I need to ask
of my data is: Are there gaps of time greater than some interval
between consecutive rows?
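In outline, answering that question is a single ordered pass comparing consecutive timestamps. A standalone Python sketch of the scan (the function name and sample data here are illustrative, not taken from the attached schema):

```python
from datetime import datetime, timedelta

def find_gaps(timestamps, max_gap):
    """Yield each timestamp that is followed by a gap longer than max_gap.

    `timestamps` must be sorted ascending, as if produced by
    SELECT ts FROM ... ORDER BY ts."""
    prev = None
    for cur in timestamps:
        if prev is not None and cur - prev > max_gap:
            yield prev
        prev = cur

ts = [datetime(2008, 1, 1, 0, 0), datetime(2008, 1, 1, 0, 1),
      datetime(2008, 1, 1, 0, 5)]   # 4-minute hole after 00:01
print(list(find_gaps(ts, timedelta(minutes=1))))
# -> [datetime.datetime(2008, 1, 1, 0, 1)]
```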
I wrote a function in plpgsql to answer this question and it worked
great. Being a python zealot I decided to rewrite the function in
plpythonu to compare performance. While initial comparisons seemed
inconclusive, after testing on large queries (over a million records)
I discovered ever-increasing time to complete the exact same query and
massive memory growth in my postgres process to the point of memory
starvation in under 15 queries.
I've reduced my schema to one table with one timestamp column, one
type and two functions in a schema named plpythonu_bug, saved with:
`pg_dump -n plpythonu_bug -s -O > bug.sql`. It is attached.
Here are some statistics from two separate psql sessions. In the
first, I ran this plpgsql function several times:

    EXPLAIN ANALYZE SELECT count(*) from gaps('2008-01-01',
    '2010-01-01', '1 min');

In the second, I ran the exact same query but with the plpythonu
function, pygaps.
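The actual pygaps is in the attached bug.sql; for readers without the attachment, a hedged reconstruction of what such a plpythonu function might look like (the table name plpythonu_bug.series, the column name ts, and the string formats in which plpythonu delivers the timestamp and interval arguments are all assumptions):

```sql
CREATE FUNCTION pygaps(start_ts timestamp, end_ts timestamp, max_gap interval)
RETURNS SETOF timestamp AS $$
    # plpy.execute() materializes the entire result set as a Python
    # list of dicts before the loop begins.
    plan = plpy.prepare(
        "SELECT ts FROM plpythonu_bug.series "
        "WHERE ts BETWEEN $1 AND $2 ORDER BY ts",
        ["timestamp", "timestamp"])
    rows = plpy.execute(plan, [start_ts, end_ts])

    from datetime import datetime, timedelta
    parse = lambda s: datetime.strptime(s, '%Y-%m-%d %H:%M:%S')
    # Assume the interval arrives formatted as 'HH:MM:SS', e.g. '00:01:00'.
    h, m, s = [int(x) for x in max_gap.split(':')]
    gap = timedelta(hours=h, minutes=m, seconds=s)

    prev = None
    out = []
    for row in rows:
        cur = parse(row['ts'])
        if prev is not None and cur - prev > gap:
            out.append(prev)
        prev = cur
    return out
$$ LANGUAGE plpythonu;
```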
Note: I had over 273,000 rows in my table, and the function returned
5103 rows each run. Memory usage is from `top` output; milliseconds
are from the EXPLAIN ANALYZE output. This is on an Ubuntu 10.04
system with 2GB RAM, postgres 8.4.6, python 2.6.5.
plpgsql function
----------------
Run #    Virt    Res     ms
before   101m    3500    n/a
1        103m    17m     584
2        104m    17m     561
3        104m    18m     579
...etc... (virtually no movement over several runs)
plpythonu function
------------------
Run #    Virt    Res     ms
before   101m    3492    n/a
1        213m    122m    1836
2        339m    246m    1784
3        440m    346m    2178
...and so on, increasing by roughly 100m with each run, such that
within a dozen or so runs I had 1.5g in resident memory and single
calls to the function were taking over 45 seconds.
My schema is attached.
Thanks for any help and insight,
Dan Popowich
Attachment: unknown_filename (text/plain, 2.7 KB)