From: | "Pavan Deolasee" <pavan(dot)deolasee(at)gmail(dot)com> |
---|---|
To: | "Heikki Linnakangas" <heikki(at)enterprisedb(dot)com> |
Cc: | PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>, pgsql-patches(at)postgresql(dot)org |
Subject: | Re: HOT WIP Patch - version 3.2 |
Date: | 2007-02-27 18:23:47 |
Message-ID: | 2e78013d0702271023q3b3fe39bja9ef7dbcf2c627e4@mail.gmail.com |
Lists: pgsql-hackers, pgsql-patches
On 2/27/07, Heikki Linnakangas <heikki(at)enterprisedb(dot)com> wrote:
>
> Pavan Deolasee wrote:
> > - What do we do with the LP_DELETEd tuples at VACUUM time?
> > In this patch, we are collecting them and vacuuming them like
> > any other dead tuples. But is that the best thing to do?
>
> Since they don't need index cleanups, it's a waste of
> maintenance_work_mem to keep track of them in the dead tuples array.
> Let's remove them in the 1st phase. That means trading the shared lock
> for a vacuum-level lock on pages with LP_DELETEd tuples. Or if we want
> to get fancy, we could skip LP_DELETEd tuples in the 1st phase for pages
> that had dead tuples on them, and scan and remove them in the 2nd phase
> when we have to acquire the vacuum-level lock anyway.
I liked the idea of not collecting the LP_DELETEd tuples in the first
pass. We also prune the HOT-update chains in the page in the first
pass; maybe that can be moved to the second pass as well. We need to
work carefully through the race conditions among VACUUM, pruning,
and tuple reuse, though.
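Just to make the two-pass idea concrete, here is a standalone sketch
(not patch code; Slot and its flags are simplifications of the real
line-pointer bits):

#include <stdbool.h>
#include <stddef.h>

/* Simplified slot: the real code looks at the line pointer flags. */
typedef struct
{
    bool dead;          /* dead tuple still referenced by indexes */
    bool lp_deleted;    /* LP_DELETEd: no index pointers remain */
} Slot;

/*
 * Pass 1, under a shared lock: remember only the tuples that need
 * index cleanup. LP_DELETEd slots stay out of the dead-tuples array,
 * so they cost no maintenance_work_mem.
 */
static size_t
collect_for_index_cleanup(const Slot *slots, size_t n, size_t *dead)
{
    size_t ndead = 0;

    for (size_t i = 0; i < n; i++)
        if (slots[i].dead && !slots[i].lp_deleted)
            dead[ndead++] = i;
    return ndead;
}

/*
 * Pass 2, under the vacuum-strength lock we need anyway: reclaim
 * both the index-cleaned tuples and the LP_DELETEd slots.
 */
static void
reclaim_slots(Slot *slots, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (slots[i].dead || slots[i].lp_deleted)
            slots[i].dead = slots[i].lp_deleted = false;
}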
> > - While searching for an LP_DELETEd tuple, we start from the
> > first offset and return the first slot which is big enough
> > to store the tuple. Is there a better search algorithm
> > (sorting/randomizing)? Should we go for best-fit instead
> > of first-fit?
>
> Best-fit seems better to me. It's pretty cheap to scan for LP_DELETEd
> line pointers, but wasting space can lead to cold updates and get much
> more expensive.
Ok. I will give it a shot once the basic things are ready.
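For when I do, the best-fit scan would look something like this
(again a standalone sketch; SlotSketch and the lp_* fields are
simplifications of the real line pointer):

#include <stdbool.h>
#include <stddef.h>

#define INVALID_SLOT ((size_t) -1)

/* Simplified line pointer: just the bits this search cares about. */
typedef struct
{
    bool   lp_deleted;  /* slot is LP_DELETEd and reusable */
    size_t lp_len;      /* length of the storage it points at */
} SlotSketch;

/* Best-fit: the smallest LP_DELETEd slot that still fits the tuple. */
static size_t
find_best_fit(const SlotSketch *slots, size_t nslots, size_t needed)
{
    size_t best = INVALID_SLOT;

    for (size_t i = 0; i < nslots; i++)
    {
        if (!slots[i].lp_deleted || slots[i].lp_len < needed)
            continue;
        if (slots[i].lp_len == needed)
            return i;                   /* exact fit, stop early */
        if (best == INVALID_SLOT || slots[i].lp_len < slots[best].lp_len)
            best = i;
    }
    return best;                        /* INVALID_SLOT if nothing fits */
}

First-fit is just the early return without tracking "best", so the
extra bookkeeping for best-fit is only a comparison or two per slot;
the scan cost should be essentially the same.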
> You could also prune the chains on the page to make room for the update,
> and if you can get a vacuum lock you can also defrag the page.
Yes, that's a good suggestion as well. I am already doing that in the
patch I am working on right now.
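The order of fallbacks is roughly this (illustrative only; every
helper name below is invented, not the real routine):

#include <stdbool.h>
#include <stddef.h>

typedef struct PageSketchData *PageSketch;
extern bool   reuse_lp_deleted_slot(PageSketch page, size_t needed);
extern void   prune_hot_chains(PageSketch page);
extern bool   conditional_vacuum_lock(PageSketch page); /* fails if pinned */
extern void   defragment_page(PageSketch page);
extern size_t page_free_space(PageSketch page);

static bool
make_room_for_hot_update(PageSketch page, size_t needed)
{
    /* Cheapest first: reuse an existing LP_DELETEd slot. */
    if (reuse_lp_deleted_slot(page, needed))
        return true;

    /* Prune HOT-update chains; that may free slots and space. */
    prune_hot_chains(page);
    if (reuse_lp_deleted_slot(page, needed))
        return true;

    /* If nobody else has the page pinned, defragment it too. */
    if (conditional_vacuum_lock(page))
    {
        defragment_page(page);
        return page_free_space(page) >= needed;
    }

    return false;   /* caller falls back to a cold (non-HOT) update */
}

The lock acquisition has to be conditional, of course; we cannot
afford to block the UPDATE waiting for other pins to go away.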
> > - Should we have metadata on the heap page to track the
> > number of LP_DELETEd tuples, the number of HOT-update chains in the
> > page, and any other information that can help us optimize
> > search/prune operations?
>
> I don't think the CPU overhead is that significant; we only need to do
> the search/prune when a page gets full. We can add flags later if we
> feel like it, but let's keep it simple for now.
I am making good progress with the line-pointer redirection stuff.
It's showing tremendous value in keeping the table and index size
under control. But we need to check the CPU overhead as well
and, if required, optimize there.
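If those scans ever show up in profiles, the page-level metadata
discussed above need not be more than a couple of counters. A sketch,
with an invented struct and field names:

#include <stdint.h>

/* Two counters that could live with the page header. */
typedef struct
{
    uint16_t n_lp_deleted;  /* LP_DELETEd slots available for reuse */
    uint16_t n_hot_chains;  /* HOT-update chains rooted on this page */
} HeapPageHotMeta;

/*
 * When n_lp_deleted == 0 the reuse scan can be skipped outright, and
 * likewise pruning when n_hot_chains == 0: four bytes per page buys
 * an early exit from both whole-page scans.
 */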
Thanks,
Pavan
--
EnterpriseDB http://www.enterprisedb.com