From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Greg Stark <gsstark(at)mit(dot)edu>
Cc: pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: Tricky bugs in concurrent index build
Date: 2006-08-23 12:35:03
Message-ID: 17859.1156336503@sss.pgh.pa.us
Lists: pgsql-hackers
Greg Stark <gsstark(at)mit(dot)edu> writes:
> But then wouldn't we have deadlock risks? If we come across these records in a
> different order from someone else (possibly even the deleter) who also wants
> to lock them? Or would it be safe to lock and release them one by one so we
> only ever hold one lock at a time?
AFAICS we could release the lock as soon as we've inserted the index
entry. (Whether there is any infrastructure to do that is another
question...)
> I'm also pondering whether it might be worth saving up all the
> DELETE_IN_PROGRESS tuples in a second tuplesort and processing them all in a
> third phase. That seems like it would reduce the amount of waiting that might
> be involved. The fear I have though is that this third phase could become
> quite large.
Actually --- a tuple that is live when we do the "second pass" scan
could well be DELETE_IN_PROGRESS (or even RECENTLY_DEAD) by the time we
do the merge and discover that it hasn't got an index entry. So offhand
I'm thinking that we *must* take a tuple lock on *every* tuple we insert
in stage two. Ugh.
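To make the protocol concrete, the per-tuple handling being argued for would go roughly like this (pseudocode only; the helper names are illustrative, not actual PostgreSQL functions):

    /* phase-two merge: for each heap tuple with no matching index entry */
    foreach tuple T not found in the index:
        lock_tuple(T);            /* blocks a concurrent deleter of T */
        recheck_visibility(T);    /* T may have become DELETE_IN_PROGRESS
                                   * or RECENTLY_DEAD since the scan */
        if (T is still visible, or its deleter might abort)
            insert_index_entry(T);
        unlock_tuple(T);          /* released immediately, so only one
                                   * tuple lock is held at a time and
                                   * lock-ordering deadlocks are avoided */

Because the lock is dropped as soon as the index entry is inserted, at most one tuple lock is held at any moment, which is why the deadlock concern about encountering tuples in a different order than a concurrent deleter does not arise.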
regards, tom lane