From: Greg Stark <stark(at)mit(dot)edu>
To: Tomasz Ostrowski <tometzky+pg(at)ato(dot)waw(dot)pl>
Cc: PostgreSQL Bugs <pgsql-bugs(at)postgresql(dot)org>
Subject: Re: Invalid indexes should not consume update overhead
Date: 2016-07-17 00:09:21
Message-ID: CAM-w4HN00MFSMwi+CF-Z9TFG4Xj8gq35zvN0yD=XT-X9y0z0Wg@mail.gmail.com
Lists: pgsql-bugs
I can't disagree with your conclusion, but I can offer a bit of perspective
on how the current situation came about.
Invalid indexes are in the same state they're in while a concurrent index
build is in progress. As far as other queries are concerned, the index
build is effectively still in progress and will eventually be completed,
so they keep maintaining the index on every update. That is where the
update overhead you're complaining about comes from.
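To make the state concrete, here's a sketch ("idx", "t", and "col" are
hypothetical names):

    -- A concurrent build that errors out, e.g. on a duplicate key,
    -- leaves the half-built index behind, marked invalid:
    CREATE UNIQUE INDEX CONCURRENTLY idx ON t (col);  -- suppose this fails
    -- The leftover is visible in pg_index with indisvalid = false:
    SELECT indexrelid::regclass, indisvalid
    FROM pg_index
    WHERE indexrelid = 'idx'::regclass;
    -- The planner won't use it for scans, but every INSERT/UPDATE on t
    -- still maintains it, which is the update overhead at issue.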
Those queries could perhaps determine that no build is actually in
progress, but that would be an extra check in the normal case, so not
necessarily a win.
The real solution, IMHO, is to actually clean up failed index builds when a
build fails; that's what normal transactions do when they abort, after all.
This was always the intention, but it looked like it was going to be a pain
and was put off (i.e., I was lazy). It's probably just several layers of
PG_TRY/PG_CATCH and closing the failed transactions and opening new ones.
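Until that happens, the manual equivalent is to hunt down the leftovers
and drop them yourself. A sketch of that workaround (not the proposed
automatic cleanup; "idx" as above):

    -- Find invalid indexes left behind by failed concurrent builds:
    SELECT indexrelid::regclass AS index_name
    FROM pg_index
    WHERE NOT indisvalid;
    -- Drop each one; CONCURRENTLY avoids holding a full table lock:
    DROP INDEX CONCURRENTLY idx;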