From: "Victor Y(dot) Yegorov" <viy(at)mits(dot)lv>
To: Postgres Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: adding new pages bulky way
Date: 2005-06-06 19:59:04
Message-ID: 20050606195904.GA9502@mits.lv
Lists: pgsql-hackers
I need your advice.
For the on-disk bitmap index, I maintain a list of TIDs.
TIDs are stored in pages as an array; each page's opaque data holds an array
of bits indicating whether the corresponding TID has been deleted and should
be skipped during a scan.
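Roughly, the layout looks like this (a sketch only; the struct names and the
per-page capacity constant are illustrative, not from the patch):

#include "postgres.h"
#include "storage/itemptr.h"

/* Hypothetical per-page capacity; the real value follows from BLCKSZ. */
#define TIDS_PER_PAGE 1000

/* Page body: a plain array of TIDs. */
typedef struct TidListPageContents
{
    ItemPointerData tids[TIDS_PER_PAGE];
} TidListPageContents;

/* Special space: one "deleted" bit per stored TID. */
typedef struct TidListPageOpaqueData
{
    bits8   deleted[(TIDS_PER_PAGE + 7) / 8];
} TidListPageOpaqueData;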
Pages that contain the TID list are organized into extents; extent N holds
2^N pages (e.g. extent 2 occupies 4 pages).
Given the number of TIDs that fit into one page and a TID's sequential
number, I can easily calculate (see the sketch after this list):
- the extent the TID belongs to;
- the page offset inside that extent, and;
- the TID's slot within the page.
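For illustration, a minimal sketch of that calculation (the function and
parameter names are mine; TID numbering is assumed 0-based). Extent e holds
2^e pages, so 2^e - 1 pages precede it, and global page p falls into extent
floor(log2(p + 1)):

static void
tidlist_locate(uint64 tid_seqno, uint32 tids_per_page,
               uint32 *extent, uint32 *page_in_extent, uint32 *slot)
{
    uint64  page = tid_seqno / tids_per_page;   /* global page index */
    uint32  e = 0;

    /* extent e covers global pages [2^e - 1, 2^(e+1) - 1) */
    while (page + 1 >= ((uint64) 1 << (e + 1)))
        e++;

    *extent = e;
    *page_in_extent = (uint32) (page - (((uint64) 1 << e) - 1));
    *slot = (uint32) (tid_seqno % tids_per_page);
}

For example, with 1000 TIDs per page, TID number 5000 lands on global page 5,
i.e. extent 2 (global pages 3-6), page 2 within it, slot 0.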
At the moment, I store the BlockNumber of each extent's first page in the
metapage and allocate all pages belonging to that extent sequentially. I do
this to minimize the number of page reads when searching for a TID in the
list: at most one page read is needed to find the TID at a given position
during a scan. I hope the idea is clear.
This also means that while an extent's pages are being added this way, no
other pages can be added to the index. And the higher the extent's number,
the longer it takes to allocate all of its pages.
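For reference, the per-page extension loop looks roughly like this (a sketch
using the ordinary buffer-manager calls; the helper name and the opaque
struct are mine, and real code would also hold the relation extension lock):

static void
tidlist_add_extent(Relation rel, uint32 extent_no)
{
    BlockNumber npages = (BlockNumber) 1 << extent_no;
    BlockNumber i;

    for (i = 0; i < npages; i++)
    {
        /* P_NEW extends the relation by exactly one page; the pages
         * come out contiguous only if no other backend extends the
         * relation in the meantime. */
        Buffer  buf = ReadBuffer(rel, P_NEW);

        LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE);
        PageInit(BufferGetPage(buf), BufferGetPageSize(buf),
                 sizeof(TidListPageOpaqueData));
        MarkBufferDirty(buf);
        UnlockReleaseBuffer(buf);
    }
}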
The question is: allocating pages this way is really ugly, I understand. Is
there an API that would allow allocating N pages in bulk?
Maybe this is a known problem that has already been solved?
Any other ideas?
Thanks in advance!
--
Victor Y. Yegorov