From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Jan Urbański <wulczer(at)wulczer(dot)org>
Cc: Postgres - Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pl/python long-lived allocations in datum->dict transformation
Date: 2012-02-11 23:48:23
Message-ID: 24229.1329004103@sss.pgh.pa.us
Lists: pgsql-hackers

Jan Urbański <wulczer(at)wulczer(dot)org> writes:
> This is annoying for functions that plough through large tables, doing
> some calculation. Attached is a patch that does the conversion of
> PostgreSQL Datums into Python dict objects in a scratch memory context
> that gets reset every time.

As best I can tell, this patch proposes creating a new, separate context
(chewing up 8KB+) for every plpython procedure that's ever used in a
given session. This cure could easily be worse than the disease as far
as total space consumption is concerned. What's more, it's unclear that
it won't malfunction altogether if the function is used recursively
(ie, what if PLyDict_FromTuple ends up calling the same function again?)

Can't you fix it so that the temp context is associated with a
particular function execution, rather than being "statically" allocated
per-function?
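
Something along these lines is what I have in mind (a rough, untested
sketch only: the struct, function, and field names are made up for
illustration, and the AllocSetContextCreate/PLyDict_FromTuple argument
lists are only approximate): create the scratch context lazily under the
current per-call context and reset it after each row's conversion.

```c
/* Untested sketch; names and signatures are illustrative, not the patch. */
#include "plpython.h"           /* pulls in Python.h / postgres.h */
#include "utils/memutils.h"

typedef struct PLyCallState
{
    /* ... whatever per-call state we already keep ... */
    MemoryContext scratch_ctx;  /* NULL until first conversion in this call */
} PLyCallState;

static PyObject *
PLy_tuple_to_dict(PLyCallState *state, PLyTypeInfo *info,
                  HeapTuple tuple, TupleDesc desc)
{
    MemoryContext oldcxt;
    PyObject   *dict;

    /*
     * Make the temp context on demand, as a child of the current
     * (per-call) context, so a recursive call gets its own context and
     * everything vanishes when the call's context is freed.
     */
    if (state->scratch_ctx == NULL)
        state->scratch_ctx =
            AllocSetContextCreate(CurrentMemoryContext,
                                  "PL/Python temporary context",
                                  ALLOCSET_DEFAULT_MINSIZE,
                                  ALLOCSET_DEFAULT_INITSIZE,
                                  ALLOCSET_DEFAULT_MAXSIZE);

    oldcxt = MemoryContextSwitchTo(state->scratch_ctx);
    dict = PLyDict_FromTuple(info, tuple, desc);
    MemoryContextSwitchTo(oldcxt);

    /* the dict itself lives in Python's heap; only palloc garbage is here */
    MemoryContextReset(state->scratch_ctx);

    return dict;
}
```

Parenting the context to the per-call context means a recursive
invocation builds its own scratch space, and an error exit cleans it up
along with the rest of the call's memory, instead of the context hanging
around for the life of the session.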
regards, tom lane