From: Pete Forman <pete(dot)forman(at)westerngeco(dot)com>
To: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>
Cc: Chris Jones <chris(at)mt(dot)sri(dot)com>, PostgreSQL General <pgsql-general(at)postgresql(dot)org>
Subject: Re: vacuum analyze again...
Date: 2001-02-21 08:53:58
Message-ID: 14995.33318.905195.486936@kryten.bedford.waii.com
Lists: pgsql-general
Bruce Momjian writes:
> > Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us> writes:
> >
> > > No, we have no ability to randomly pick rows to use for
> > > estimating statistics. Should we have this ability?
> >
> > That would be really slick, especially given the fact that VACUUM
> > runs much faster than VACUUM ANALYZE for a lot of PG users. I
> > could change my daily maintenance scripts to do a VACUUM of
> > everything, followed by a VACUUM ANALYZE of the small tables,
> > followed by a VACUUM ANALYZE ESTIMATE (or whatever) of the large
> > tables.
> >
> > Even cooler would be the ability to set a table size threshold,
> > so that VACUUM ANALYZE would automatically choose the appropriate
> > method based on the table size.
>
> Added to TODO:
>
> * Allow ANALYZE to process a certain random percentage of rows
Does this reduced analysis need to be random? Why not allow the DBA
to specify, in some way, which rows or blocks to analyze?
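To make the two proposals concrete, here is a minimal sketch, in Python rather than server-side C, of what sampling at the block level might look like: a random fraction of fixed-size blocks is read and statistics are estimated from those rows alone. The function name, block size, and fraction are all hypothetical illustrations, not anything PostgreSQL implements; a DBA-directed variant would simply pass an explicit list of block numbers instead of drawing a random one.

```python
import random

def sample_blocks(table, block_size, fraction, seed=0):
    """Pick a random fraction of fixed-size blocks and return their rows.

    A DBA-specified variant would take the block numbers directly
    instead of choosing them with rng.sample().
    """
    rng = random.Random(seed)
    n_blocks = (len(table) + block_size - 1) // block_size
    chosen = rng.sample(range(n_blocks), max(1, int(n_blocks * fraction)))
    rows = []
    for b in chosen:
        rows.extend(table[b * block_size:(b + 1) * block_size])
    return rows

# Toy "table": one integer column, values 0..99 each repeated 1000 times.
table = [i % 100 for i in range(100000)]

# Read only 5% of the blocks instead of scanning all 100,000 rows.
sample = sample_blocks(table, block_size=250, fraction=0.05)

# Estimate the statistics ANALYZE cares about from the sample alone.
est_distinct = len(set(sample))
est_mean = sum(sample) / len(sample)
print(est_distinct, round(est_mean, 1))
```

The trade-off the thread describes falls out directly: the sample touches 5% of the I/O, and the estimates are close but not exact, which is acceptable for planner statistics on very large tables.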
--
Pete Forman -./\.- Disclaimer: This post is originated
WesternGeco -./\.- by myself and does not represent
pete(dot)forman(at)westerngeco(dot)com -./\.- opinion of Schlumberger, Baker
http://www.crosswinds.net/~petef -./\.- Hughes or their divisions.
Next Message: Grigoriy G. Vovk, 2001-02-21 09:02:44, Re: two tables - foreign keys referring to each other...
Previous Message: Andrey Y. Mosienko, 2001-02-21 08:48:47, How to release SET() in PgSQL?