From: Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com>
To: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
Cc: Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Proposal: Log inability to lock pages during vacuum
Date: 2014-10-21 23:00:07
Message-ID: 5446E577.3060801@BlueTreble.com
Lists: pgsql-hackers
On 10/21/14, 5:39 PM, Alvaro Herrera wrote:
> Jim Nasby wrote:
>
>> Currently, a non-freeze vacuum will punt on any page it can't get a
>> cleanup lock on, with no retry. Presumably this should be a rare
>> occurrence, but I think it's bad that we just assume that and won't
>> warn the user if something bad is going on.
>
> I think if you really want to attack this problem, rather than just
> being noisy about it, what you could do is to keep a record of which
> page numbers you had to skip, and then once you're done with your first
> scan you go back and retry the lock on the pages you skipped.
I'm OK with that if the community is; I was just trying for minimum invasiveness.
If I go this route, I'd like some input though...
- How to handle storing the block IDs. Fixed-size array or something fancier? What should we limit it to, especially since we're already allocating maintenance_work_mem for the tid array.
- What happens if we run out of space to remember skipped blocks? I could do something like what we do when we run out of space in the dead_tuples array, but I'm worried that would add a serious amount of complexity, especially since re-processing these blocks could be what actually pushes us over the limit.
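To make the idea concrete, here's a rough, untested standalone sketch of what I have in mind. All the names, the stubs, and the 1024-entry cap are made up for illustration and don't correspond to anything in the real vacuumlazy.c code:

/*
 * Standalone sketch (not actual PostgreSQL code): remember pages we had
 * to skip because the cleanup lock wasn't available, then retry them once
 * after the first pass.  Names and limits are invented for illustration.
 */
#include <stdbool.h>
#include <stdio.h>

typedef unsigned int BlockNumber;

/* Hypothetical cap so this doesn't compete with the dead-tuple array */
#define MAX_SKIPPED_BLOCKS 1024

typedef struct SkippedBlocks
{
    BlockNumber blocks[MAX_SKIPPED_BLOCKS];
    int         nblocks;
    bool        overflowed;     /* ran out of room; stop recording */
} SkippedBlocks;

/*
 * Stubs standing in for the real lock attempt and page processing.
 * The stub is deterministic (every 7th block fails), so retries fail
 * here too; in reality the retry would often succeed.
 */
static bool try_cleanup_lock(BlockNumber blkno) { return (blkno % 7) != 0; }
static void process_page(BlockNumber blkno)     { printf("vacuumed %u\n", blkno); }

static void
remember_skipped(SkippedBlocks *skipped, BlockNumber blkno)
{
    if (skipped->nblocks < MAX_SKIPPED_BLOCKS)
        skipped->blocks[skipped->nblocks++] = blkno;
    else
        skipped->overflowed = true;   /* degrade to today's behaviour */
}

int
main(void)
{
    BlockNumber   nblocks = 100;      /* pretend relation size */
    SkippedBlocks skipped = {0};

    /* First pass: punt on pages we can't lock, but remember them */
    for (BlockNumber blkno = 0; blkno < nblocks; blkno++)
    {
        if (!try_cleanup_lock(blkno))
        {
            remember_skipped(&skipped, blkno);
            continue;
        }
        process_page(blkno);
    }

    /* Second pass: retry the remembered pages once */
    for (int i = 0; i < skipped.nblocks; i++)
    {
        BlockNumber blkno = skipped.blocks[i];

        if (try_cleanup_lock(blkno))
            process_page(blkno);
        else
            printf("still could not lock %u; giving up on it\n", blkno);
    }

    if (skipped.overflowed)
        printf("ran out of space to remember skipped blocks\n");

    return 0;
}

The overflow flag is the simplest possible answer to the second question: once the array fills up we just stop remembering pages and fall back to what vacuum does today, rather than trying to spill or re-scan.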
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com