From: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>
To: Alexander Korotkov <aekorotkov(at)gmail(dot)com>
Cc: Костя Кузнецов <chapaev28(at)ya(dot)ru>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: gist vacuum gist access
Date: 2014-09-08 08:46:29
Message-ID: 540D6CE5.8040505@vmware.com
Lists: pgsql-hackers
On 09/08/2014 11:08 AM, Alexander Korotkov wrote:
> On Mon, Sep 8, 2014 at 11:13 AM, Heikki Linnakangas
> <hlinnakangas(at)vmware(dot)com> wrote:
>
>> On 09/07/2014 05:11 PM, Костя Кузнецов wrote:
>>
>>> Hello.
>>> I reworked VACUUM for the GiST index.
>>> All tests pass.
>>> I also tested VACUUM on a table with 2 million rows; everything is OK.
>>> On my machine the old VACUUM takes about 9 seconds; this version takes
>>> about 6-7 seconds.
>>> Please review.
>>>
>>
>> If I'm reading this correctly, the patch changes gistbulkdelete to scan
>> the index in physical order, while the old code starts from the root and
>> scans the index from left to right, in logical order.
>>
>> Scanning the index in physical order is wrong, if any index pages are
>> split while vacuum runs. A page split could move some tuples to a
>> lower-numbered page, so that the vacuum will not scan those tuples.
>>
>> In the b-tree code, we solved that problem back in 2006, so it can be
>> done but requires a bit more code. In b-tree, we solved it with a
>> "vacuum cycle ID" number that's set on the page halves when a page is
>> split. That allows VACUUM to identify pages that were split while it
>> was running, and "jump back" to vacuum them too. See commit
>> http://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=5749f6ef0cc1c67ef9c9ad2108b3d97b82555c80.
>> It should be possible to do something similar in GiST, and in fact you
>> might be able to reuse the NSN field that's already set on the page
>> halves on split, instead of adding a new "vacuum cycle ID".
>
> The idea is right. But in fact, does GiST ever recycle any page? It has
> an F_DELETED flag, but ISTM this flag is never set. So, I think it's
> possible that this patch works correctly. However, GiST can probably
> leave a new page unused after a server crash.
Hmm. Its correctness depends on the fact that when a page is split, the
new right page is always placed on a higher-numbered page than the old
page, which hinges on the fact that we never recycle pages.
We used to delete pages in VACUUM FULL, but that got removed when
old-style VACUUM FULL was changed to just rewrite the whole heap. So if
you have pg_upgraded from an old version, it's possible that you still
have some F_DELETED pages in the tree. And then there's the case of
unused pages after a crash that you mentioned. So no, you can't really
rely on that. We could of course remove the code that checks the FSM
altogether, and always extend the relation on page split, but the
long-term solution is to allow recycling pages.
> Anyway, I'm not fan of committing patch in this shape. We need to let GiST
> recycle pages first, then implement VACUUM similar to b-tree.
+1. Although I guess we could implement the b-tree style strategy first,
and implement page recycling later. It's just a bit hard to test that
it's correct, when there is no easy way to get deleted pages in the
index to begin with.
- Heikki