From: | Simon Riggs <simon(at)2ndquadrant(dot)com> |
---|---|
To: | Pavan Deolasee <pavan(dot)deolasee(at)gmail(dot)com> |
Cc: | Robert Haas <robertmhaas(at)gmail(dot)com>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Claudio Freire <klaussfreire(at)gmail(dot)com>, Greg Stark <stark(at)mit(dot)edu>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, PostgreSQL-Dev <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Vacuum: allow usage of more than 1GB of work mem |
Date: | 2016-09-14 17:40:00 |
Message-ID: | CANP8+j+L0gVnzEdy8swUWE1YSxtXighB354RK4c5KjY=pMFw0A@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Thread: | |
Lists: | pgsql-hackers |
On 14 September 2016 at 11:19, Pavan Deolasee <pavan(dot)deolasee(at)gmail(dot)com> wrote:
>> In
>> theory we could even start with the list of TIDs and switch to the
>> bitmap if the TID list becomes larger than the bitmap would have been,
>> but I don't know if it's worth the effort.
>>
>
> Yes, that works too. Or may be even better because we already know the
> bitmap size requirements, definitely for the tuples collected so far. We
> might need to maintain some more stats to further optimise the
> representation, but that seems like unnecessary detailing at this point.
That sounds best to me... build the simple representation, but as we
do maintain stats to show to what extent that set of tuples is
compressible.
When we hit the limit on memory we can then selectively compress
chunks to stay within memory, starting with the most compressible
chunks.
I think we should use the chunking approach Robert suggests, though
mainly because that allows us to consider how parallel VACUUM should
work - writing the chunks to shmem. That would also allow us to apply
a single global limit for vacuum memory rather than an allocation per
VACUUM.
We can then scan multiple indexes at once in parallel, all accessing
the shmem data structure.
We should also find the compression is better when we consider chunks
rather than the whole data structure at once.
--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
From | Date | Subject | |
---|---|---|---|
Next Message | Pavan Deolasee | 2016-09-14 17:40:09 | Re: Vacuum: allow usage of more than 1GB of work mem |
Previous Message | Alvaro Herrera | 2016-09-14 17:23:54 | Re: Vacuum: allow usage of more than 1GB of work mem |