From: Bruce Momjian <bruce(at)momjian(dot)us>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, Jesper Pedersen <jesper(dot)pedersen(at)redhat(dot)com>, Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Hash Indexes
Date: 2016-09-20 15:20:35
Message-ID: 20160920152035.GA18263@momjian.us
Lists: pgsql-hackers
On Thu, Sep 15, 2016 at 11:11:41AM +0530, Amit Kapila wrote:
> I think it is possible without breaking pg_upgrade, if we match all
> items of a page at once (and save them as local copy), rather than
> matching item-by-item as we do now. We are already doing something
> similar for btree; see the explanation of BTScanPosItem and
> BTScanPosData in nbtree.h.
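
(For reference, the btree mechanism Amit points to collects all matches
from the current page into backend-local structures and then returns them
one at a time. A hash-side analogue might look roughly like the sketch
below; the Hash* names here are hypothetical illustrations, not actual
PostgreSQL definitions.)

#include "postgres.h"

#include "access/itup.h"        /* MaxIndexTuplesPerPage */
#include "storage/block.h"
#include "storage/buf.h"
#include "storage/itemptr.h"
#include "storage/off.h"

/* Hypothetical analogue of BTScanPosItem: one matched item on the page. */
typedef struct HashScanPosItem
{
    ItemPointerData heapTid;        /* TID of the referenced heap tuple */
    OffsetNumber    indexOffset;    /* index item's location on the page */
} HashScanPosItem;

/*
 * Hypothetical analogue of BTScanPosData: all matches from the current
 * bucket (or overflow) page are copied here in a single pass, so the scan
 * can hand them back one by one instead of re-matching item by item.
 */
typedef struct HashScanPosData
{
    Buffer          buf;            /* pinned buffer, if still held */
    BlockNumber     currPage;       /* block number of the current page */
    int             firstItem;      /* first valid entry in items[] */
    int             lastItem;       /* last valid entry in items[] */
    int             itemIndex;      /* next entry to return to the caller */
    HashScanPosItem items[MaxIndexTuplesPerPage];
} HashScanPosData;

With something along these lines, the scan could fill items[] while it
holds the page, then release it and simply walk itemIndex from firstItem
to lastItem.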
FYI, pg_upgrade has code to easily mark indexes as invalid and to create a
script the user can run to recreate those indexes as valid. I have
received no complaints when this approach was used.
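
To make that concrete, here is a minimal, self-contained C sketch of the
"hand the user a rebuild script" part. This is not pg_upgrade's actual
code; the function name, output file name, and index names are made up
for illustration.

#include <stdio.h>
#include <stdlib.h>

/*
 * Illustrative only: write a script of REINDEX commands that the user
 * can run after the upgrade to make the flagged indexes valid again.
 */
static void
write_rebuild_script(const char *path, const char *const *index_names, int n)
{
    FILE   *fp = fopen(path, "w");

    if (fp == NULL)
    {
        fprintf(stderr, "could not open %s for writing\n", path);
        exit(1);
    }

    for (int i = 0; i < n; i++)
        fprintf(fp, "REINDEX INDEX %s;\n", index_names[i]);

    fclose(fp);
}

int
main(void)
{
    /* hypothetical index names */
    const char *const idx[] = {"public.hash_idx_a", "public.hash_idx_b"};

    write_rebuild_script("rebuild_hash_indexes.sql", idx, 2);
    return 0;
}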
--
Bruce Momjian <bruce(at)momjian(dot)us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +