| From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
|---|---|
| To: | PFC <lists(at)peufeu(dot)com> |
| Cc: | "James William Pye" <pgsql(at)jwp(dot)name>, Hackers <pgsql-hackers(at)postgresql(dot)org> |
| Subject: | Re: pg_proc probin misuse |
| Date: | 2006-05-29 13:54:02 |
| Message-ID: | 13001.1148910842@sss.pgh.pa.us |
| Lists: | pgsql-hackers |
PFC <lists(at)peufeu(dot)com> writes:
>> If it were really expensive to derive bytecode from source text
>> then maybe it'd make sense to do what you're doing, but surely that's
>> not all that expensive. Everyone else manages to parse prosrc on the
>> fly and cache the result in memory; why isn't plpython doing that?
> It depends on the number of imported modules in the function. If it
> imports a lot of modules, it can take some time to compile a python
> function (especially if the modules have some initialisation code which
> must be run on import).
Surely the initialization code would have to be run anyway ... and if
the function does import a pile of modules, do you really want to cache
all that in its pg_proc entry? What happens if some of the modules get
updated later?
regards, tom lane
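
A minimal sketch (not PL/Python's actual code) of the approach Tom describes: compile prosrc on the fly the first time a function is called and cache the resulting code object in backend memory, keyed here by a hypothetical function OID with a stand-in `_fetch_prosrc` helper. Because Python keeps imported modules in `sys.modules`, any `import` statements in the body run their initialisation only on the first call; later calls just re-bind the already-loaded module, so recompiling from source text stays cheap.

```python
import sys

# Per-backend cache of compiled function bodies, keyed by pg_proc OID.
# (The OID key and _fetch_prosrc helper are hypothetical stand-ins.)
_code_cache = {}

def _fetch_prosrc(fn_oid):
    """Stand-in for looking up pg_proc.prosrc for fn_oid."""
    return {
        16384: "import math\nresult = math.sqrt(args[0])",
    }[fn_oid]

def call_python_function(fn_oid, args):
    """Compile prosrc once, cache the code object, then execute it."""
    code = _code_cache.get(fn_oid)
    if code is None:
        source = _fetch_prosrc(fn_oid)
        # compile() itself is cheap; any modules the body imports are
        # initialised only once, because Python caches them in sys.modules.
        code = compile(source, "<plpython:%d>" % fn_oid, "exec")
        _code_cache[fn_oid] = code
    namespace = {"args": args}
    exec(code, namespace)
    return namespace.get("result")

print(call_python_function(16384, [2.0]))  # first call: imports math, compiles, caches
print(call_python_function(16384, [9.0]))  # reuses cached code; math is already loaded
print("math" in sys.modules)               # True: module initialisation ran only once
```

This also illustrates Tom's objection to storing compiled state in pg_proc: the cache above lives only in backend memory, so a newer version of an imported module is picked up the next time a backend starts, whereas bytecode frozen into the catalog would not be.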