From: Vasilis Ventirozos <v(dot)ventirozos(at)gmail(dot)com>
To: pgsql-admin <pgsql-admin(at)postgresql(dot)org>
Subject: Re: postgresql9.4 aws - no pg_upgrade
Date: 2017-11-02 21:27:10
Message-ID: F96FA87E-9234-4788-9B5D-FA3159FED7D4@gmail.com
Lists: pgsql-admin
> On 2 Nov 2017, at 23:03, bala jayaram <balajayaram22(at)gmail(dot)com> wrote:
>
> Hi Team,
>
>
> We tried it in production and pg_upgrade works well, but running vacuumdb caused a huge CPU spike and the system halted. Is there a way to speed up or parallelize vacuuming for faster recovery after pg_upgrade?
>
> Our database size is around 500GB, containing multiple databases with a huge number of records. What is the minimum vacuuming needed after pg_upgrade? This is for a migration from 9.3 to 9.4.
All you need to do right after the upgrade is gather fresh planner statistics by running ANALYZE, for example with "vacuumdb -a -v -z".
That should take a while, but it shouldn't "halt" anything. I believe vacuumdb in 9.4 doesn't have the -j option yet, so you can script
something that gets all tables, splits the list, and runs each part in X parallel psql sessions.
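A minimal sketch of that split-and-run idea (the table names here are placeholders; in practice the list would come from the catalog via psql, as shown in the comment):

```shell
# Round-robin a table list into $JOBS per-job SQL files, one VACUUM ANALYZE
# per table, so each file can be fed to its own psql session.
JOBS=4

# Placeholder list; normally generated with something like:
#   psql -At -d mydb -c "SELECT schemaname || '.' || tablename FROM pg_tables
#     WHERE schemaname NOT IN ('pg_catalog','information_schema')"
tables="public.orders
public.customers
public.line_items
public.payments
public.events"

rm -f vacuum_part_*.sql
i=0
for t in $tables; do
    part=$(( i % JOBS + 1 ))                       # assign table to a job file
    echo "VACUUM ANALYZE $t;" >> "vacuum_part_$part.sql"
    i=$(( i + 1 ))
done
```

Each part can then run concurrently, e.g. "for f in vacuum_part_*.sql; do psql -d mydb -f "$f" & done; wait". Tuning JOBS to the number of spare cores keeps the CPU spike under control.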
Once the statistics are in place, scheduling an actual vacuum is a good idea; that can be done at any convenient
time, or you can again split the work across sessions with a script.
Regards,
Vasilis Ventirozos