From: AP <pgsql(at)inml(dot)weebeastie(dot)net>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: PostgreSQL Bugs <pgsql-bugs(at)postgresql(dot)org>
Subject: Re: 10.1: hash index size exploding on vacuum full analyze
Date: 2017-11-16 04:30:11
Message-ID: 20171116043011.n7vcw4iuufzi3uyu@inml.weebeastie.net
Lists: pgsql-bugs
On Thu, Nov 16, 2017 at 09:48:13AM +0530, Amit Kapila wrote:
> On Thu, Nov 16, 2017 at 4:59 AM, AP <pgsql(at)inml(dot)weebeastie(dot)net> wrote:
> > I have some tables that'll never grow, so I decided to replace a big index
> > with one using a fillfactor of 100. That went well: the index shrank to
> > 280GB. I then did a vacuum full analyze on the table to get rid of any
> > cruft (the table will be static for a long time, after which only deletes
> > will happen), and the index exploded to 701GB. When it was created with
> > fillfactor 90 (organically, by filling the table), the index was 309GB.
>
> Sounds quite strange. I suspect the vacuum full performs more bucket
> splits than the original data load did. By any chance do you have a
> copy of both indexes (before vacuum full and after vacuum full)? Can
> you check and share the output of pgstattuple->pgstathashindex() and
> pageinspect->hash_metapage_info()? I want to confirm whether the bloat
> is due to additional splits.
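For reference, those diagnostics would look something like the following (a minimal sketch: I'm assuming the index is named datum_idx here, and both functions ship in the pgstattuple and pageinspect contrib modules):

    -- load the contrib modules (once per database)
    create extension if not exists pgstattuple;
    create extension if not exists pageinspect;

    -- per-index stats: bucket/overflow/bitmap page counts and free space
    select * from pgstathashindex('datum_idx');

    -- metapage contents: ntuples, maxbucket, ovflpoint, spares, etc.
    -- (the metapage of a hash index is always block 0)
    select * from hash_metapage_info(get_raw_page('datum_idx', 0));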
I'll see what I can do. I'm currently vacuuming the table without the index
so that I can then do a create index concurrently and get back my 280GB
index (that's how I got it in the first place). Namely:
create index concurrently on ... using hash (datum) with ( fillfactor = 100 );
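For the record, the whole rebuild sequence is roughly the following (a sketch only, with hypothetical names: big_table for the table, big_table_datum_idx for the index):

    -- drop the bloated index first, so the vacuum full doesn't rebuild it
    drop index concurrently big_table_datum_idx;

    -- compact the heap; with the index gone, only the table is rewritten
    vacuum (full, analyze) big_table;

    -- recreate the index without blocking concurrent reads and writes
    create index concurrently big_table_datum_idx
        on big_table using hash (datum) with (fillfactor = 100);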
I've got more similar tables, though.
AP