From: Andres Freund <andres(at)anarazel(dot)de>
To: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>
Cc: Alvaro Herrera <alvherre(at)commandprompt(dot)com>, "pgsql-bugs" <pgsql-bugs(at)postgresql(dot)org>, Hans van Kranenburg <hans(dot)van(dot)kranenburg(at)mendix(dot)com>
Subject: Re: BUG #5566: High levels of savepoint nesting trigger stack overflow in AssignTransactionId
Date: 2010-07-19 19:14:27
Message-ID: 201007192114.28426.andres@anarazel.de
Lists: pgsql-bugs

On Monday 19 July 2010 21:03:25 Heikki Linnakangas wrote:
> On 19/07/10 21:32, Andres Freund wrote:
> > On Monday 19 July 2010 20:19:35 Heikki Linnakangas wrote:
> >> On 19/07/10 20:58, Andres Freund wrote:
> >>> On Monday 19 July 2010 19:57:13 Alvaro Herrera wrote:
> >>>> Excerpts from Andres Freund's message of Mon Jul 19 11:58:06 -0400 2010:
> >>>>> On Monday 19 July 2010 17:26:25 Hans van Kranenburg wrote:
> >>>>>> When issuing an update statement in a transaction with ~30800 levels
> >>>>>> of savepoint nesting (which is insane, but possible), PostgreSQL
> >>>>>> segfaults due to a stack overflow in the AssignTransactionId
> >>>>>> function, which recursively assigns transaction ids to parent
> >>>>>> transactions.
> >>>>>
> >>>>> It seems easy enough to throw a check_stack_depth() in there -
> >>>>> survives make check here.
> >>>>
> >>>> I wonder if it would work to deal with the problem non-recursively
> >>>> instead. We don't impose subxact depth restrictions elsewhere, why
> >>>> start now?
> >>>
> >>> It looks trivial enough, but what's the point?
> >>
> >> To support more than <insert arbitrary limit here> subtransactions,
> >> obviously.
> >
> > Well. I got that far. But why is that something worthy of support?
>
> Because it's not really much harder than putting in the limit.
The difference is that you then get errors like:
WARNING: 53200: out of shared memory
LOCATION: ShmemAlloc, shmem.c:190
ERROR: 53200: out of shared memory
HINT: You might need to increase max_locks_per_transaction.
LOCATION: LockAcquireExtended, lock.c:680
STATEMENT: INSERT INTO tstack VALUES(1)
After which pg takes longer to clean up the transaction than I am willing to
wait (ok ok, that's at an obscene 100k nesting level).
At 50k, a single commit takes some minutes as well (no cassert, -O0).
All that seems pretty annoying to debug...
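(For the archives: the out-of-shared-memory above is the lock table filling
up, as I read lock.c/xact.c. Every subxact that gets an XID also takes an
exclusive lock on that XID via XactLockTableInsert(), held until top-level
commit/abort. Roughly, with default settings:

    lock table size ~= max_locks_per_transaction * (max_connections + max_prepared_transactions)
                    ~= 64 * (100 + 0) = 6400 entries

so ~100k subxact XID locks blows well past that.)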
> Besides, if you put in a limit of 3000, someone with a smaller stack might
> still run out of stack space.
I had left that check there.
Will send a patch, have it locally, just need to verify it.
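The rough shape of it (a sketch only, field names as in xact.c's
TransactionState; the real patch still needs polish): instead of recursing
once per nesting level, collect the XID-less parents first and assign
top-down, so the recursion never goes more than one level deep:

    static void
    AssignTransactionId(TransactionState s)
    {
        bool        isSubXact = (s->parent != NULL);

        /* keep the guard for machines with small stacks */
        check_stack_depth();

        /*
         * Ensure parent(s) have XIDs, so that a child always has an XID
         * later than its parent.  Collect the XID-less ancestors into an
         * array and assign from the top down; each recursive call below
         * then finds its parent already has an XID, so the recursion is
         * at most one level deep.
         */
        if (isSubXact && !TransactionIdIsValid(s->parent->transactionId))
        {
            TransactionState p = s->parent;
            TransactionState *parents;
            size_t      parentOffset = 0;

            parents = palloc(sizeof(TransactionState) * s->nestingLevel);
            while (p != NULL && !TransactionIdIsValid(p->transactionId))
            {
                parents[parentOffset++] = p;
                p = p->parent;
            }

            while (parentOffset != 0)
                AssignTransactionId(parents[--parentOffset]);

            pfree(parents);
        }

        /* ... GetNewTransactionId() and the rest unchanged ... */
    }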
Andres