From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: Neil Conway <neilc(at)samurai(dot)com>, Magnus Hagander <magnus(at)hagander(dot)net>, pgsql-hackers(at)postgresql(dot)org, Teodor Sigaev <teodor(at)sigaev(dot)ru>
Subject: Re: Warning on contrib/tsearch2
Date: 2007-03-28 03:08:17
Message-ID: 13748.1175051297@sss.pgh.pa.us
Lists: pgsql-hackers

Andrew Dunstan <andrew(at)dunslane(dot)net> writes:
> Tom Lane wrote:
>> ... random backend code should not, not, not be using fopen()
>> directly. If you lose control to an elog, which is certainly possible
>> seeing that this loop calls into the utils/mb subsystem, you'll leak
>> the file descriptor. Use AllocateFile/FreeFile instead of fopen/fclose.
> Does that apply to things like plperlu?
For stuff that executes in the regular backend environment, yes.
Anyplace you are calling code that could potentially throw an elog(),
you'd better play by the rules.
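
For illustration, the usual pattern looks roughly like this (a minimal
sketch; the function name and the wording of the error message are made
up, but AllocateFile/FreeFile are the real calls from storage/fd.h):

    #include "postgres.h"
    #include "storage/fd.h"        /* AllocateFile, FreeFile */

    /*
     * Sketch: read a file from backend code.  AllocateFile registers the
     * FILE pointer with the backend's resource machinery, so if anything
     * below throws elog(ERROR) the descriptor is released at transaction
     * abort instead of being leaked.
     */
    static void
    read_some_file(const char *filename)
    {
        FILE       *fp;
        char        buf[1024];

        fp = AllocateFile(filename, "r");
        if (fp == NULL)
            ereport(ERROR,
                    (errcode_for_file_access(),
                     errmsg("could not open file \"%s\": %m", filename)));

        while (fgets(buf, sizeof(buf), fp) != NULL)
        {
            /* per-line processing may elog() without leaking fp */
        }

        FreeFile(fp);
    }
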
I'm prepared to believe that libperl's error handling and resource
management conventions were designed by someone who knew what they were
doing --- so in code called from libperl, you need to follow the libperl
coding rules, instead. And if you're calling back into the main backend
from a libperl subroutine, you need to provide an impedance match ---
like a subtransaction controlled by a PG_TRY block.
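
Schematically, the wrapper around a backend call made from inside a
libperl subroutine looks something like this (a sketch modeled on what
the PLs do; do_backend_work is a hypothetical stand-in for the real SPI
call, and the memory-context handling is simplified):

    #include "postgres.h"
    #include "access/xact.h"        /* BeginInternalSubTransaction, etc. */
    #include "utils/resowner.h"     /* CurrentResourceOwner */

    /* hypothetical stand-in for whatever the function really does */
    static void
    do_backend_work(void)
    {
        elog(NOTICE, "doing backend work");
    }

    /*
     * Run a backend operation inside its own subtransaction, so that an
     * elog(ERROR) rolls back just that much work and control comes back
     * to us in a sane state instead of longjmp'ing out through libperl.
     */
    static bool
    call_backend_safely(void)
    {
        MemoryContext oldcontext = CurrentMemoryContext;
        ResourceOwner oldowner = CurrentResourceOwner;
        bool        ok = true;

        BeginInternalSubTransaction(NULL);
        /* go back to the caller's memory context to do the work */
        MemoryContextSwitchTo(oldcontext);

        PG_TRY();
        {
            do_backend_work();

            ReleaseCurrentSubTransaction();
            MemoryContextSwitchTo(oldcontext);
            CurrentResourceOwner = oldowner;
        }
        PG_CATCH();
        {
            /* abort the subtransaction and restore the prior state */
            RollbackAndReleaseCurrentSubTransaction();
            MemoryContextSwitchTo(oldcontext);
            CurrentResourceOwner = oldowner;

            /*
             * A real implementation would CopyErrorData() here and hand
             * the message to Perl's croak; this sketch just clears the
             * error state and reports failure.
             */
            FlushErrorState();
            ok = false;
        }
        PG_END_TRY();

        return ok;
    }
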
>> I'm halfway tempted to change postgres.h to #define these functions to
>> yield errors, and only allow #undef'ing them in the files that are
>> supposed to access the C library functions directly.
> not a bad idea :-)
Not something to try now, but maybe near the beginning of a devel cycle
...
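
The kind of thing being suggested might look about like this (purely a
sketch of the idea, with a made-up ALLOW_RAW_STDIO symbol, not proposed
patch text):

    /*
     * Hypothetical addition to a backend-wide header: make direct stdio
     * calls fail to compile (undeclared identifier) unless a file
     * explicitly says it knows what it's doing.
     */
    #ifndef ALLOW_RAW_STDIO
    #define fopen(a, b)   do_not_use_fopen_directly_see_AllocateFile
    #define fclose(fp)    do_not_use_fclose_directly_see_FreeFile
    #endif

A file like fd.c, which legitimately needs the C library calls, would
define ALLOW_RAW_STDIO (or #undef the macros) before touching stdio.
With gcc one could also look at #pragma GCC poison for a clearer error
message, but the plain macro trick is portable.
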
regards, tom lane