From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Bruce Momjian" <bruce(at)momjian(dot)us>
Cc: "Daniel Farina" <daniel(at)heroku(dot)com>, "Greg Stark" <stark(at)mit(dot)edu>, "pgsql-hackers" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pg_upgrade and statistics
Date: 2012-03-13 19:48:37
Message-ID: 4F5F5E450200002500046254@gw.wicourts.gov
Lists: pgsql-hackers
Bruce Momjian <bruce(at)momjian(dot)us> wrote:
>>>> cir=# analyze "CaseHist";
>>>> ANALYZE
>>>> Time: 143450.467 ms
> OK, so a single 44GB table took 2.5 minutes to analyze; that is
> not good. It would require 11 such tables to reach 500GB (0.5
> TB), and would take 27 minutes. The report I had was twice as
> long, but still in the ballpark of "too long". :-(
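To gauge how many tables of that size a given cluster holds (an
illustrative query, not part of the original exchange), one can list
the largest tables and scale the per-table timing above:

-- Illustrative only: the largest heaps dominate ANALYZE time, so
-- their sizes let one scale the ~2.5 min per 44GB figure.
SELECT relname,
       pg_size_pretty(pg_table_size(oid)) AS heap_size
  FROM pg_class
 WHERE relkind = 'r'
 ORDER BY pg_table_size(oid) DESC
 LIMIT 10;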
We have a sister machine to the one used for that benchmark -- same
hardware and database. Disabling the cost-based vacuum delay didn't
seem to make much difference:
cir=# set vacuum_cost_delay = 0;
SET
cir=# \timing on
Timing is on.
cir=# analyze "CaseHist" ;
ANALYZE
Time: 146169.728 ms
So it really does seem to take that long.
-Kevin
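For reference, one general way to shorten such a run (a sketch of a
common technique, not something tested in this thread) is to lower
default_statistics_target, since ANALYZE samples roughly 300 * target
rows per table; a coarse first pass can then be followed by a
full-quality pass later:

-- Sketch only: a smaller statistics target means a smaller sample,
-- so ANALYZE finishes sooner at the cost of coarser estimates.
SET default_statistics_target = 10;   -- server default is 100
ANALYZE "CaseHist";                   -- quick, coarse pass
RESET default_statistics_target;
ANALYZE "CaseHist";                   -- full-quality pass when time allows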