From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Simon Riggs <simon(dot)riggs(at)enterprisedb(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, PostgreSQL Developers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Next Steps with Hash Indexes
Date: 2021-10-27 10:57:41
Message-ID: CAA4eK1LnM1Sf6uFRM3XPrU0Lu_O7U6d02k+SDuS88XJD-NEwxw@mail.gmail.com
Lists: pgsql-hackers
On Wed, Oct 27, 2021 at 2:32 AM Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>
> On Tue, Oct 5, 2021 at 6:50 AM Simon Riggs <simon(dot)riggs(at)enterprisedb(dot)com> wrote:
> > With unique data, starting at 1 and monotonically ascending, hash
> > indexes will grow very nicely from 0 to 10E7 rows without causing >1
> > overflow block to be allocated for any bucket. This keeps the search
> > time for such data to just 2 blocks (bucket plus, if present, 1
> > overflow block). The small number of overflow blocks is because of the
> > regular and smooth way that splits occur, which works very nicely
> > without significant extra latency.
>
> It is my impression that with non-unique data things degrade rather
> badly.
>
But we will hold the bucket lock only for a unique index, in which case
there shouldn't be non-unique data in the index. The non-unique case
should work as it does today. I guess this is the reason Simon took
an example of unique data.
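As a side note, Simon's observation can be sanity-checked with a small
simulation. This is only a sketch, not PostgreSQL's actual implementation: it
uses blake2b as a stand-in for the server's internal hash function, and the
bucket count and per-bucket capacity figures are hypothetical. The point it
illustrates is that unique monotonically ascending keys hash uniformly across
buckets, so per-bucket occupancy stays close to the average and long overflow
chains don't form:

```python
import hashlib
from collections import Counter

def h(key: int) -> int:
    # Stand-in for PostgreSQL's internal hash function (hypothetical choice);
    # any well-mixed 64-bit hash behaves similarly for this purpose.
    return int.from_bytes(
        hashlib.blake2b(key.to_bytes(8, "little"), digest_size=8).digest(),
        "little",
    )

N_KEYS = 100_000      # unique keys 1..N, monotonically ascending
N_BUCKETS = 1024      # hypothetical bucket count after splits have settled

# Count how many keys land in each bucket.
counts = Counter(h(k) % N_BUCKETS for k in range(1, N_KEYS + 1))

avg = N_KEYS / N_BUCKETS
print(f"avg per bucket: {avg:.1f}")
print(f"max bucket: {max(counts.values())}, min bucket: {min(counts.values())}")
```

With a uniform hash the max/min bucket sizes stay within a narrow band around
the average, which is why each bucket needs at most about one overflow page in
Simon's scenario; skew (long chains) would only appear with many duplicate
keys hashing to the same bucket, which a unique index rules out.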
--
With Regards,
Amit Kapila.