From: Justin Pasher <justinp(at)newmediagateway(dot)com>
To: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Postgres stats collector showing high disk I/O
Date: 2010-04-23 22:46:16
Message-ID: 4BD22338.2000307@newmediagateway.com
Lists: pgsql-general pgsql-hackers
----- Original Message -----
From: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
Date: Fri, 23 Apr 2010 18:28:03 -0400
Subject: Re: [GENERAL] Postgres stats collector showing high disk I/O
To: Justin Pasher <justinp(at)newmediagateway(dot)com>
CC: depesz(at)depesz(dot)com, pgsql-general(at)postgresql(dot)org
> Justin Pasher wrote:
>
>
>> Agh... I used pg_stats_reset (with an s) when searching for it. I
>> ran the function and it returned true, but the stats file only
>> shrunk by ~100k (still over 18MB total). Is there something else I
>> need to do? Does this mean the file is mostly bloated with bogus
>> data that it can't "reset"? I'm guessing I should just try to delete
>> the file outright?
>>
>
> Err, yeah, pg_stat_reset only resets the stats for the current database.
> You need to reset for all databases, or alternatively, shut down the
> server, remove the file, and restart
>
Ahh, yes. I probably should have realized that. I ran the function on
all of the databases (138 total), and now the stats file is down to
~400k. The disk I/O is also practically nothing now.
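In case it helps anyone else searching the archives, this is roughly the loop I used (assuming superuser access and that psql can connect to each database without prompting for a password):

    # rough sketch: run pg_stat_reset() in every connectable database
    for db in $(psql -At -c "SELECT datname FROM pg_database WHERE datallowconn"); do
        psql -d "$db" -c "SELECT pg_stat_reset();"
    done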
So now as a continuation of my original message, what would cause the
stats file to get so big for what seems like (IMO) a small number of
databases? I have a Postgres 7.4 cluster that has about 250 databases,
but its stats file is only 3.5MB. Do I need to look into avoiding a
bunch of CREATE/DROP statements (a suggestion in the archives)? I don't
know the actual usage patterns of the code base (I'll have to get with
the developers), but I think at worst, they might create temp tables
from time to time. As a matter of fact, I just checked the stats file
again (about 10 minutes later) and it's doubled to 800K. Is Postgres
just trying to store too much information in the statistics file?
Ultimately, the main statistics I care about are current connections and
queries being run.
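For what it's worth, by "connections and queries" I really just mean pg_stat_activity, which I check with something along these lines (column names as of 8.x):

    # quick check of who's connected and what each backend is running
    psql -c "SELECT datname, procpid, usename, current_query FROM pg_stat_activity;"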
A previous post in the archives from Tom said that vacuum (even
autovacuum) should clean up potential stat file bloat. Do I need to
tweak my autovacuum settings? Everything is currently set to the
defaults because I've never had any performance issues that warranted
tweaking the settings.
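If it helps, I can pull the autovacuum-related settings that are actually in effect with something like:

    # list the autovacuum GUCs currently in effect (all defaults here)
    psql -c "SELECT name, setting FROM pg_settings WHERE name LIKE 'autovacuum%';"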
--
Justin Pasher