Re: Rearchitecting for storage

From: Matthew Pounsett <matt(at)conundrum(dot)com>
To: Rob Sargent <robjsargent(at)gmail(dot)com>
Cc: Kenneth Marshall <ktm(at)rice(dot)edu>, pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: Rearchitecting for storage
Date: 2019-07-19 14:08:45
Message-ID: CAAiTEH8_VhMMD1pFu85qtsjUgXQYMxb2JW2aFLzL2j5J1Y=bhw@mail.gmail.com
Lists: pgsql-general

On Thu, 18 Jul 2019 at 19:53, Rob Sargent <robjsargent(at)gmail(dot)com> wrote:

> > That would likely keep the extra storage requirements small, but still
> > non-zero. Presumably the upgrade would be unnecessary if it could be done
> > without rewriting files. Is there any rule of thumb for making sure one
> > has enough space available for the upgrade? I suppose that would come
> > down to what exactly needs to get rewritten, in what order, etc., but the
> > pg_upgrade docs don't seem to have that detail. For example, since we've
> > got an ~18TB table (including its indices), if that needs to be rewritten
> > then we're still looking at requiring significant extra storage. Recent
> > experience suggests postgres won't necessarily do things in the most
> > storage-efficient way... we just had a reindex on that database fail (in
> > --single-user) because 17TB was insufficient free storage for the db to
> > grow into.
>
> Can you afford to drop and re-create those 6 indices?

Technically, yes. I don't see any reason we'd be prevented from doing
that. But rebuilding them will take a long time, and that's a lot of
downtime to incur any time we update the DB, so I'd prefer to avoid it if
I can. For scale, the recent 'reindex database' that failed ran for nine
days before it ran out of room, and that was in single-user mode. Trying
to do that concurrently would take a lot longer, I imagine.
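
If we did go the drop-and-recreate route, I'd expect each index to look
roughly like this (names here are placeholders, not our real schema).
CREATE INDEX CONCURRENTLY avoids blocking writes, but it's slower than a
plain build and still needs enough free space for the new index, so it
doesn't really get us around either problem:

  -- hypothetical example; the real definitions would come from \d on the table
  DROP INDEX CONCURRENTLY big_table_col_idx;
  CREATE INDEX CONCURRENTLY big_table_col_idx ON big_table (col);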
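
On the space question, a quick way to break down how much of that ~18TB
is heap versus index data is something like this ('big_table' again being
a placeholder for the real table name):

  SELECT pg_size_pretty(pg_table_size('big_table'))          AS heap_and_toast,
         pg_size_pretty(pg_indexes_size('big_table'))        AS indexes,
         pg_size_pretty(pg_total_relation_size('big_table')) AS total;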
