From: Dilip Kumar <dilipbalaut(at)gmail(dot)com>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Bernd Helmle <mailings(at)oopsware(dot)de>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pg_verify_checksums failure with hash indexes
Date: 2018-09-04 05:19:17
Message-ID: CAFiTN-tC3DeooA7KYi04AaaujFzOQQ-HzsH7KFZwqNbkcuLZLQ@mail.gmail.com
Lists: pgsql-hackers
On Tue, Sep 4, 2018 at 10:14 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> On Mon, Sep 3, 2018 at 2:44 PM Dilip Kumar <dilipbalaut(at)gmail(dot)com> wrote:
>> On Mon, Sep 3, 2018 at 8:37 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
>> > On Sat, Sep 1, 2018 at 10:28 AM Dilip Kumar <dilipbalaut(at)gmail(dot)com> wrote:
>> >>
>> >> I think if we compute it with the formula I suggested upthread,
>> >>
>> >> #define HASH_MAX_BITMAPS Min(BLCKSZ / 8, 1024)
>> >>
>> >> then for BLCKSZ of 8K and larger it keeps the current value (1024),
>> >> where there is no overrun. And for smaller BLCKSZ, I think it leaves
>> >> sufficient space for the bitmap array. With a 1K BLCKSZ,
>> >> sizeof(HashMetaPageData) + sizeof(HashPageOpaque) = 968, which is
>> >> already very close to the BLCKSZ.
>> >>
>> >
>> > Yeah, so at 1K BLCKSZ, the above formula gives HASH_MAX_BITMAPS a
>> > value of 128, which is the value it had prior to commit 620b49a1.
>> > I think it would be better if you add a comment in your patch
>> > explaining the importance/advantage of such a formula.
>> >
>> I have added the comments.
>>
>
In my previous patch, I mistakenly wrote Max(BLCKSZ / 8, 1024) instead of
Min(BLCKSZ / 8, 1024). I have fixed that in the attached version.
> Thanks, I will look into it. Can you please do some pg_upgrade tests
> to ensure that this doesn't impact the upgrade? You can create
> hash-index and populate it with some data in version 10 and try
> upgrading to 11 after applying this patch. You can also try it with
> different block-sizes.
>
Ok, I will do that.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
Attachment: hash_overflow_fix_v2.patch (application/octet-stream, 1.1 KB)