Re: Invalid indexes should not consume update overhead

From: Peter Geoghegan <pg(at)heroku(dot)com>
To: "Rader, David" <davidr(at)openscg(dot)com>
Cc: Tomasz Ostrowski <tometzky+pg(at)ato(dot)waw(dot)pl>, PostgreSQL Bugs <pgsql-bugs(at)postgresql(dot)org>, Greg Stark <stark(at)mit(dot)edu>
Subject: Re: Invalid indexes should not consume update overhead
Date: 2016-07-17 20:59:12
Message-ID: CAM3SWZRKUqdPeh5aGGg6BydBWxFL+fySmx5xdoWzCUBCCD5W2Q@mail.gmail.com
Lists: pgsql-bugs

On Sun, Jul 17, 2016 at 1:42 PM, Rader, David <davidr(at)openscg(dot)com> wrote:
> For example, in SQL Server you can "alter index disable". If you are about
> to do a lot of bulk operations. But there is no "re-enable"; instead you
> have to "alter index rebuild" because as has been said on this thread you
> don't know what has changed since the disable.
>
> Basically this is very similar to dropping and recreating indexes around
> bulk loads/updates.

That seems pretty pointless. Why not actually drop the index, then?

The only reason I can think of is that there is value in representing
that indexes should continue to have optimizer statistics (that would
happen for expression indexes in Postgres) without actually paying for
the ongoing maintenance of the index during write statements. Even
that seems like kind of a stretch, though.
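
For reference, the drop-and-recreate pattern the quoted message compares this to might look like the following in Postgres (table, index, and file names here are hypothetical, purely for illustration):

```sql
-- Drop the index before the bulk load so each inserted row
-- pays no index-maintenance cost
DROP INDEX IF EXISTS orders_customer_idx;

COPY orders FROM '/tmp/orders.csv' WITH (FORMAT csv);

-- Rebuild from scratch afterwards; this plays the role of
-- SQL Server's "alter index rebuild" after "alter index disable"
CREATE INDEX orders_customer_idx ON orders (customer_id);
```

A single sorted build at the end is generally much cheaper than incremental per-row maintenance during the load, which is why both approaches end in a full rebuild.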

--
Peter Geoghegan
