From: Tatsuo Ishii <t-ishii(at)sra(dot)co(dot)jp>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: t-ishii(at)sra(dot)co(dot)jp, postgres <hackers(at)postgreSQL(dot)org>
Subject: Re: [HACKERS] Re: SIGBUS in AllocSetAlloc & jdbc
Date: 1999-05-03 14:08:54
Message-ID: 199905031408.XAA04842@ext16.sra.co.jp
Lists: pgsql-hackers
> Hmm. The documentation does say somewhere that LO object handles are
> only good within a transaction ... so it's amazing this worked reliably
> under 6.4.x.
>
> Is there any way we could improve the backend's LO functions to defend
> against this sort of misuse, rather than blindly accepting a stale
> filehandle?
It should not be very difficult. We could explicitly close LO
filehandles on commit.
But I'm no longer confident about this. From the comments in be-fsstubs.c:
>Builtin functions for open/close/read/write operations on large objects.
>These functions operate in the current portal variable context, which
>means the large object descriptors hang around between transactions and
>are not deallocated until explicitly closed, or until the portal is
>closed.
If the above is true, LO filehandles should be able to survive across
transactions.
The following data are included in them. My question is: can these data
survive across transactions? I guess not.
typedef struct LargeObjectDesc
{
    Relation      heap_r;   /* heap relation */
    Relation      index_r;  /* index relation on seqno attribute */
    IndexScanDesc iscan;    /* index scan we're using */
    TupleDesc     hdesc;    /* heap relation tuple desc */
    TupleDesc     idesc;    /* index relation tuple desc */
    [snip]