From: | Claudio Freire <klaussfreire(at)gmail(dot)com> |
---|---|
To: | Andres Freund <andres(at)anarazel(dot)de> |
Cc: | Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Anastasia Lubennikova <a(dot)lubennikova(at)postgrespro(dot)ru>, Anastasia Lubennikova <lubennikovaav(at)gmail(dot)com>, PostgreSQL-Dev <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Vacuum: allow usage of more than 1GB of work mem |
Date: | 2017-04-08 02:19:42 |
Message-ID: | CAGTBQpbhTBKE7bCfessQuUwZm6SBdNYV6hq0GrEu3g+oWXAuMw@mail.gmail.com |
Lists: | pgsql-hackers |
On Fri, Apr 7, 2017 at 10:06 PM, Claudio Freire <klaussfreire(at)gmail(dot)com> wrote:
>>> >> + if (seg->num_dead_tuples >= seg->max_dead_tuples)
>>> >> + {
>>> >> + /*
>>> >> + * The segment is overflowing, so we must allocate a new segment.
>>> >> + * We could have a preallocated segment descriptor already, in
>>> >> + * which case we just reinitialize it, or we may need to repalloc
>>> >> + * the vacrelstats->dead_tuples array. In that case, seg will no
>>> >> + * longer be valid, so we must be careful about that. In any case,
>>> >> + * we must update the last_dead_tuple copy in the overflowing
>>> >> + * segment descriptor.
>>> >> + */
>>> >> + Assert(seg->num_dead_tuples == seg->max_dead_tuples);
>>> >> + seg->last_dead_tuple = seg->dt_tids[seg->num_dead_tuples - 1];
>>> >> + if (vacrelstats->dead_tuples.last_seg + 1 >= vacrelstats->dead_tuples.num_segs)
>>> >> + {
>>> >> + int new_num_segs = vacrelstats->dead_tuples.num_segs * 2;
>>> >> +
>>> >> + vacrelstats->dead_tuples.dt_segments = (DeadTuplesSegment *) repalloc(
>>> >> + (void *) vacrelstats->dead_tuples.dt_segments,
>>> >> + new_num_segs * sizeof(DeadTuplesSegment));
>>> >
>>> > Might be worth breaking this into some sub-statements, it's quite hard
>>> > to read.
>>>
>>> Breaking what precisely? The comment?
>>
>> No, the three-line statement computing the new value of
>> dead_tuples.dt_segments. I'd at least assign dead_tuples to a local
>> variable, to cut the length of the statement down.
>
> Ah, alright. Will try to do that.
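For reference, the suggested refactor amounts to pulling `vacrelstats->dead_tuples` into a local pointer before the growth logic. Here is an illustrative, self-contained sketch, not the actual patch hunk: the struct members are stubbed down to what the quoted code touches, and `repalloc` is stood in by `realloc`.

```c
#include <stdlib.h>

typedef struct DeadTuplesSegment
{
	int			max_dead_tuples;	/* stub: the real struct also holds TID arrays */
} DeadTuplesSegment;

typedef struct DeadTuplesMultiArray
{
	int			last_seg;
	int			num_segs;
	DeadTuplesSegment *dt_segments;
} DeadTuplesMultiArray;

/* stand-in for PostgreSQL's repalloc, so the sketch runs outside the backend */
#define repalloc(ptr, size) realloc((ptr), (size))

/*
 * Double the segment array when the last segment is in use.  With the
 * multiarray in a local pointer, the repalloc statement stays short.
 */
static void
maybe_grow_segments(DeadTuplesMultiArray *dt)
{
	if (dt->last_seg + 1 >= dt->num_segs)
	{
		int			new_num_segs = dt->num_segs * 2;

		dt->dt_segments = (DeadTuplesSegment *)
			repalloc(dt->dt_segments, new_num_segs * sizeof(DeadTuplesSegment));
		dt->num_segs = new_num_segs;
	}
}
```

In the patch itself the caller would pass `&vacrelstats->dead_tuples` (or assign it to a local variable at the top of the function), which is exactly the readability change requested.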
Attached is an updated patch set with the requested changes.
Segment allocation still follows the exponential strategy, and segment
lookup is still linear.
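The lookup shape described here, a linear scan over segments (each internally sorted, with its maximum TID cached in last_dead_tuple) followed by a binary search within the chosen segment, can be sketched as follows. Names and types are simplified for illustration; ints stand in for ItemPointers, and this is not the patch's actual code.

```c
#include <stdbool.h>
#include <stdlib.h>

/*
 * Simplified stand-in for a dead-tuples segment: a sorted array of TIDs
 * with the largest value cached, as in the patch's last_dead_tuple.
 */
typedef struct
{
	int		   *dt_tids;			/* sorted ascending */
	int			num_dead_tuples;
	int			last_dead_tuple;	/* == dt_tids[num_dead_tuples - 1] */
} Segment;

static int
cmp_int(const void *a, const void *b)
{
	int			x = *(const int *) a;
	int			y = *(const int *) b;

	return (x > y) - (x < y);
}

/*
 * Linear scan over segments: the first segment whose cached maximum is
 * >= the probe TID is the only one that can contain it (segments hold
 * disjoint, increasing TID ranges), so binary-search within it.
 */
static bool
tid_is_dead(const Segment *segs, int num_segs, int tid)
{
	for (int i = 0; i < num_segs; i++)
	{
		if (tid <= segs[i].last_dead_tuple)
			return bsearch(&tid, segs[i].dt_tids, segs[i].num_dead_tuples,
						   sizeof(int), cmp_int) != NULL;
	}
	return false;
}
```

With exponentially growing segments the number of segments stays small, which is why the linear scan over them is cheap in practice.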
I rebased the early free patch (patch 3) to apply on top of the v9
patch 2 (it needed some changes). I recognize the early free patch
didn't get nearly as much scrutiny, so I'm fine with committing only 2
if that one's ready to go but 3 isn't.
If it's decided to go for fixed 128M segments and a binary search of
segments, I don't think I can get that ready and tested before the
commitfest ends.
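For comparison, the alternative being discussed (fixed-size segments with a binary search over them) would replace the linear scan with a lower-bound search on each segment's cached maximum TID. A hypothetical sketch, with names chosen for illustration only:

```c
/* Only the per-segment maximum matters for choosing a segment. */
typedef struct
{
	int			last_dead_tuple;	/* largest TID stored in this segment */
} Segment;

/*
 * Lower-bound binary search over per-segment maxima: returns the index
 * of the first segment whose last_dead_tuple >= tid, or num_segs if no
 * segment can contain tid.
 */
static int
find_segment(const Segment *segs, int num_segs, int tid)
{
	int			lo = 0;
	int			hi = num_segs;		/* search the half-open range [lo, hi) */

	while (lo < hi)
	{
		int			mid = lo + (hi - lo) / 2;

		if (segs[mid].last_dead_tuple < tid)
			lo = mid + 1;
		else
			hi = mid;
	}
	return lo;
}
```

The lookup itself is straightforward; as the message says, the open work would be in switching the allocation strategy to fixed 128M segments and retesting, not in the search code.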
Attachment | Content-Type | Size |
---|---|---|
0002-Vacuum-allow-using-more-than-1GB-work-mem-v9.patch | text/x-patch | 23.3 KB |
0003-Vacuum-free-dead-tuples-array-as-early-as-possible-v2.patch | text/x-patch | 2.6 KB |