From: Perumal Raj <perucinci(at)gmail(dot)com>
To: rihad <rihad(at)mail(dot)ru>
Cc: hjp-pgsql(at)hjp(dot)at, pgsql-general General <pgsql-general(at)postgresql(dot)org>
Subject: Re: Recommendation to run vacuum FULL in parallel
Date: 2019-04-03 17:02:00
Message-ID: CALvqh4qXSQ4nFG16hGzbEkPTfORih3tuOhyaT9P7RxVKdy6O6Q@mail.gmail.com
Lists: pgsql-general
Hi All,
Thanks for all your valuable inputs. Here is some more data:
Though we have 150 GB of free space spread across 500 tables, the DB is
still growing at about 1 GB every other day.
Also, we have a manual vacuum job scheduled to run weekly, so it seems
the freed space is not always being reused?
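For what it's worth, here is a quick check (a rough sketch; the database
name "mydb" is just a placeholder) to see whether that free space is dead
tuples that plain VACUUM should already be reclaiming:

    # Tables with the most dead tuples and their last (auto)vacuum times,
    # from the standard pg_stat_user_tables view.
    psql -d mydb -c "
      SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
      FROM   pg_stat_user_tables
      ORDER  BY n_dead_tup DESC
      LIMIT  20;"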
So to conclude the requirement here: the only way to get parallelism is
to run multiple scripts, and there is no need to run REINDEX separately
(VACUUM FULL rebuilds the indexes anyway).
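One way to get that parallelism (a sketch, assuming PostgreSQL 9.5+
client tools and the same placeholder database name "mydb") is to let
vacuumdb spread the tables over several connections instead of
maintaining the table lists by hand:

    # VACUUM FULL + ANALYZE across 4 parallel connections; vacuumdb hands
    # the tables out to the workers, so no manual split is needed.
    # Note: the docs warn that --full combined with --jobs can deadlock if
    # certain system catalogs are processed in parallel, so it is safest
    # to restrict it to user tables (repeat -t for each table) or accept
    # that small risk during the downtime window.
    vacuumdb --full --analyze --jobs=4 mydb

Splitting the 500 tables into a few files of VACUUM FULL statements and
running each file through its own psql session would achieve the same
thing.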
Question: Do we need to consider table dependencies while preparing the
scripts, in order to avoid lock conflicts during VACUUM FULL?
At present, maintenance_work_mem is set to 20 GB.
Question: Do we need to tweak any other parameters?
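Not sure anything else is needed, but as a sanity check before the run
(same placeholder database name), the knobs that matter here can be
listed, and maintenance_work_mem can also be raised per session instead
of globally:

    # Show the current values of the settings relevant to this run.
    psql -d mydb -c "
      SELECT name, setting, unit
      FROM   pg_settings
      WHERE  name IN ('maintenance_work_mem',
                      'autovacuum_work_mem',
                      'autovacuum_vacuum_scale_factor');"
    # e.g. inside a session:  SET maintenance_work_mem = '20GB';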
Note: we are planning this activity during application downtime only.
Let me know if I missed anything.
Regards,
Raj
On Wed, Apr 3, 2019 at 8:42 AM rihad <rihad(at)mail(dot)ru> wrote:
> > And future updates can reuse it, too (an update is very similar to an
> > insert+delete).
>
>
> Hm, then it's strange that our DB takes 6 times as much space as a
> freshly restored one (only the public schema is considered).
>
> > Not if autovacuum has a chance to run between updates.
>
> Ours is run regularly, although we had to tune it down so it doesn't
> interfere with normal database activity, so each run on the table takes
> several hours. We did that by setting autovacuum_vacuum_scale_factor =
> 0.05, down from the default 0.2.
>
>
>