From: hubert depesz lubaczewski <depesz(at)depesz(dot)com>
To: Artem Tomyuk <admin(at)leboutique(dot)com>
Cc: Keith <keith(at)keithf4(dot)com>, pgsql-admin <pgsql-admin(at)postgresql(dot)org>
Subject: Re: how to shrink pg_attribute table in some database
Date: 2018-03-26 15:24:25
Message-ID: 20180326152425.GA23643@depesz.com
Lists: pgsql-admin
On Mon, Mar 26, 2018 at 05:33:19PM +0300, Artem Tomyuk wrote:
> Can't, it generates huge IO spikes.
>
> But....
>
> A few hours ago I manually started VACUUM VERBOSE on pg_attribute; it has
> now finished and I have some output:
>
> INFO: "pg_attribute": found 554728466 removable, 212058 nonremovable row
> versions in 44550921 out of 49326696 pages
> DETAIL: 178215 dead row versions cannot be removed yet.
> There were 53479 unused item pointers.
> 0 pages are entirely empty.
> CPU 1097.53s/1949.50u sec elapsed 6337.86 sec.
> Query returned successfully with no result in 01:47:36 hours.
>
> what do you think?
>
> select count(*) on pg_attribute returns:
> 158340 rows.
>
> So as I understand it, VACUUM FULL will create a new pg_attribute and write
> only that number of "valid" rows, but it will still have to scan the 300GB
> old table? So the time estimate will be about the same as for a regular
> vacuum?
more or less, yes.
the thing is - find and fix whatever is causing this insane churn of
tables/attributes.
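
To track down the source of the churn, one approach (a minimal sketch; it requires a live PostgreSQL server and assumes the statistics collector is enabled) is to watch the insert/delete counters on pg_attribute itself, since heavy temp-table or CREATE/DROP TABLE activity shows up there directly:

```sql
-- Snapshot the catalog churn counters; run this twice a few minutes
-- apart and compare the deltas. Rapid growth in n_tup_ins / n_tup_del
-- points at frequent table (or temp table) creation and destruction.
SELECT n_tup_ins, n_tup_del, n_dead_tup, last_autovacuum
FROM pg_stat_sys_tables
WHERE relname = 'pg_attribute';
```

Temporarily setting log_statement = 'ddl' would then show which clients are issuing the CREATE/DROP statements responsible.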
Best regards,
depesz