From: | Greg Stark <stark(at)mit(dot)edu> |
---|---|
To: | Simon Riggs <simon(at)2ndquadrant(dot)com> |
Cc: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Robert Haas <robertmhaas(at)gmail(dot)com>, Claudio Freire <klaussfreire(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, PostgreSQL-Dev <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Vacuum: allow usage of more than 1GB of work mem |
Date: | 2016-09-07 15:12:16 |
Message-ID: | CAM-w4HNcUmaez3sG7W+Xp9gZsCH5T7JFuhtSSi_-_A97VHv4ag@mail.gmail.com |
Lists: | pgsql-hackers |
On Wed, Sep 7, 2016 at 1:45 PM, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
> On 6 September 2016 at 19:59, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>
>> The idea of looking to the stats to *guess* about how many tuples are
>> removable doesn't seem bad at all. But imagining that that's going to be
>> exact is folly of the first magnitude.
>
> Yes. Bear in mind I had already referred to allowing +10% to be safe,
> so I think we agree that a reasonably accurate, yet imprecise
> calculation is possible in most cases.
That would all be well and good if it weren't trivial to do what
Robert suggested. This is just a large unsorted list that we need to
iterate through. Just allocate chunks of a few megabytes and, when the
current one fills up, allocate a new chunk and keep going. There's no
need to get tricky with estimates and resizing and whatever.
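To illustrate, here is a minimal sketch (not PostgreSQL's actual vacuum code;
the type and struct names, chunk size, and helper functions are illustrative
assumptions) of what such a chunked, append-only TID list could look like:
appending grabs a new fixed-size chunk only when the current one is full, and
iteration just walks the chunks in order.

```c
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for PostgreSQL's 6-byte item pointer (block number + offset). */
typedef struct DemoTid
{
    unsigned int   block;
    unsigned short offset;
} DemoTid;

/* Roughly a few megabytes of TIDs per chunk (size chosen arbitrarily here). */
#define CHUNK_TIDS ((4 * 1024 * 1024) / sizeof(DemoTid))

typedef struct TidChunk
{
    struct TidChunk *next;
    size_t           ntids;
    DemoTid          tids[CHUNK_TIDS];
} TidChunk;

typedef struct TidList
{
    TidChunk *head;
    TidChunk *tail;
} TidList;

static void
tidlist_append(TidList *list, DemoTid tid)
{
    if (list->tail == NULL || list->tail->ntids >= CHUNK_TIDS)
    {
        /* List is empty or the current chunk is full: allocate another. */
        TidChunk *chunk = malloc(sizeof(TidChunk));

        if (chunk == NULL)
        {
            fprintf(stderr, "out of memory\n");
            exit(1);
        }
        chunk->next = NULL;
        chunk->ntids = 0;
        if (list->tail)
            list->tail->next = chunk;
        else
            list->head = chunk;
        list->tail = chunk;
    }
    list->tail->tids[list->tail->ntids++] = tid;
}

int
main(void)
{
    TidList list = {NULL, NULL};
    size_t  total = 0;

    /* Append more TIDs than fit in one chunk to force a second allocation. */
    for (unsigned int i = 0; i < CHUNK_TIDS + 1000; i++)
        tidlist_append(&list, (DemoTid){i, 1});

    /* Iterating is just a walk over the chunks, in insertion order. */
    for (TidChunk *c = list.head; c != NULL; c = c->next)
        total += c->ntids;

    printf("stored %zu tids\n", total);
    return 0;
}
```

The point of the sketch: no up-front estimate of the total number of dead
tuples is needed, memory grows in bounded steps, and nothing is ever
reallocated or copied.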
--
greg