RE: Keep needing to run manual analyze

From: Rob Northcott <Rob(dot)Northcott(at)compilator(dot)com>
To: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: RE: Keep needing to run manual analyze
Date: 2021-01-07 09:22:25
Message-ID: AM8PR08MB584346EAD5935785C0C55BE79BAF0@AM8PR08MB5843.eurprd08.prod.outlook.com
Lists: pgsql-general

From: Michael Lewis <mlewis(at)entrata(dot)com>
Sent: 06 January 2021 18:11
To: Rob Northcott <Rob(dot)Northcott(at)compilator(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Keep needing to run manual analyze

On Wed, Jan 6, 2021 at 10:29 AM Rob Northcott <Rob(dot)Northcott(at)compilator(dot)com> wrote:
We have an application that uses a Postgres database (currently 9.6). All the autovacuum/analyze settings are just left at the defaults.
We’ve had a few instances recently where users have complained of slow performance and running a manual Analyze (not vacuum) on their database has resolved it.
Should it be considered normal that this needs to be done manually periodically even when autovacuum is on?
It’s not a very heavy application by a lot of people’s standards (it’s just a point-of-sale/stock-control system) but there are a couple of hundred databases on the server, all in fairly constant use through the day.

> You may need more workers, and a higher cost limit before work is paused for cost_delay. Depending on how many tables per database in the cluster, more workers would likely be ideal, or *maybe* a smaller naptime if there are tons of tables overall and all of them are relatively small/see little changes.
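[As an illustrative sketch of the knobs mentioned above — the values below are assumptions for discussion, not recommendations from this thread:]

```sql
-- Illustrative starting points only; tune against your own workload.
ALTER SYSTEM SET autovacuum_max_workers = 6;           -- default 3; requires a server restart
ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 1000;  -- default -1 (falls back to vacuum_cost_limit = 200)
ALTER SYSTEM SET autovacuum_naptime = '30s';           -- default 1min; shorter helps with many small tables
SELECT pg_reload_conf();  -- reload picks up all but the worker count
```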

> It really depends on your workload and *why* the tables aren't getting analyzed as frequently as you need. If your cost limit/delay means that the auto vacuum/analyze is rather throttled (and the default settings would be that situation given today's I/O throughput on any decent production machine), and you have some large tables with many large indexes that are constantly in need of vacuuming, and you don't have sufficient maintenance work memory configured to avoid re-scanning the indexes repeatedly to get the work done... you may never get around to the other tables. If you have a table that is (nearly) all inserts, then a periodic vacuum/analyze done manually is prudent before PG13.
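[A hedged sketch of per-table tuning for a large, heavily-updated table — the table name and values here are hypothetical:]

```sql
-- Hypothetical table; scale factors are illustrative, not prescriptive.
ALTER TABLE stock_movements SET (
    autovacuum_vacuum_scale_factor  = 0.02,  -- vacuum after ~2% of rows change (default 0.2)
    autovacuum_analyze_scale_factor = 0.01   -- analyze after ~1% of rows change (default 0.1)
);
-- More memory per worker means fewer repeated index-scan passes per vacuum.
ALTER SYSTEM SET maintenance_work_mem = '512MB';  -- default 64MB
SELECT pg_reload_conf();
```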

> Are you logging all auto vacuums/analyzes and able to run pgBadger or a similar analysis on it? It would be helpful to see some stats on what is going on currently.
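[Turning on that logging is a single setting; a minimal sketch:]

```sql
-- 0 logs every autovacuum/autoanalyze run; a value in ms logs only slow ones.
ALTER SYSTEM SET log_autovacuum_min_duration = 0;
SELECT pg_reload_conf();
```

[The resulting log lines can then be fed to pgBadger for aggregate stats.]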

Thanks for the tips. I don’t think it’s being logged at the moment (though we could always turn that on if we need more info), but what you’ve said at least confirms that a manual analyze shouldn’t be necessary. I’ll have to go and read up on the autovacuum settings. It’s not something I’ve looked into in detail before; I just left the defaults, which look like they’re not doing what we need.
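[Before changing any settings, it can help to see which tables autovacuum is actually neglecting. One way, using the standard pg_stat_user_tables view:]

```sql
-- Tables with the most un-analyzed row modifications,
-- plus when each was last analyzed manually or by autovacuum.
SELECT relname,
       n_mod_since_analyze,
       last_autoanalyze,
       last_analyze
FROM   pg_stat_user_tables
ORDER  BY n_mod_since_analyze DESC
LIMIT  20;
```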
