From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Heikki Linnakangas <heikki(at)enterprisedb(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Partial vacuum versus pg_class.reltuples
Date: 2009-06-07 20:19:30
Message-ID: 18162.1244405970@sss.pgh.pa.us
Lists: pgsql-hackers
Robert Haas <robertmhaas(at)gmail(dot)com> writes:
> On Sun, Jun 7, 2009 at 3:24 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> [ thinks a bit and reads the code some more ... ] There is a
>> considerably safer alternative, which is to let ANALYZE update the
>> reltuples estimate based on the pages it sampled; which should be a
>> considerably less biased sample than the pages a partial vacuum would
>> have looked at. And we have already got the code doing that, either
>> in a standalone ANALYZE or an ANALYZE attached to a vacuum that turned
>> out to be partial.
> I'm confused with how this squares with the previous discussion - I
> thought you observed a case where this wasn't happening.
No, the problem was that the wrong things were getting done to reltuples
entries for indexes. The heap reltuples values were correct, but there
is code that takes the heap value into account when estimating the value
for an index, and that was all wet because it was using the number of
tuples on the scanned pages rather than any total-table estimate.
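To make that concrete (using a hypothetical table t with a hypothetical index
t_i, just for illustration): the heap and index estimates both live in
pg_class, so you can watch what a standalone ANALYZE does to them:

    -- heap row and index row each carry their own relpages/reltuples estimate
    SELECT relname, relpages, reltuples
      FROM pg_class
     WHERE relname IN ('t', 't_i');

    -- resample; this should refresh the heap estimate from the sampled pages
    -- and give the index estimation code a total-table number to work from
    ANALYZE t;

The index row is the one that was getting the bogus value.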
>>> Do we have any reasonable manual way of forcing
>>> VACUUM to scan the entire heap?
>>
>> You could use VACUUM FREEZE, for instance.
> That'll generate a fair amount of I/O.
Er, isn't that exactly what you asked for?
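(Spelled out, against a hypothetical table t, that would just be

    VACUUM FREEZE VERBOSE t;  -- freezing has to visit every heap page,
                              -- so none can be skipped as already clean

and visiting every page is exactly where the I/O goes.)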
regards, tom lane