From: Nathan Bossart <nathandbossart(at)gmail(dot)com>
To: Andy Fan <zhihuifan1213(at)163(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org, pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: cleanup patches for dshash
Date: 2024-01-22 03:51:18
Message-ID: 20240122035118.GA1774330@nathanxps13
Lists: pgsql-hackers
On Mon, Jan 22, 2024 at 10:28:42AM +0800, Andy Fan wrote:
> Both LGTM.
Thanks for looking.
> +dshash_strcmp(const void *a, const void *b, size_t size, void *arg)
> +{
> + Assert(strlen((const char *) a) < size);
> + Assert(strlen((const char *) b) < size);
> +
>
> Do you think the change below would be more explicit?
>
> #define DSMRegistryNameSize 64
>
> typedef struct DSMRegistryEntry
> {
>     char name[DSMRegistryNameSize];
>     ...
> } DSMRegistryEntry;
>
> Assert(strlen((const char *) a) < DSMRegistryNameSize);
This is effectively what it's doing already. These functions are intended
to be generic so that they could be used with other dshash tables with
string keys, possibly with different sizes.
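For reference, here is a plausible completion of the quoted fragment,
together with a matching hash function. This is only a sketch of what such
generic helpers might look like; dshash_strhash and the use of
string_hash() from common/hashfn.h are my assumptions, not necessarily
what the patch does:

    /* Compare NUL-terminated string keys stored in fixed-size buffers. */
    int
    dshash_strcmp(const void *a, const void *b, size_t size, void *arg)
    {
        Assert(strlen((const char *) a) < size);
        Assert(strlen((const char *) b) < size);

        return strcmp((const char *) a, (const char *) b);
    }

    /* Hash a NUL-terminated string key; string_hash() stops at the NUL. */
    dshash_hash
    dshash_strhash(const void *v, size_t size, void *arg)
    {
        Assert(strlen((const char *) v) < size);

        return string_hash((const char *) v, size);
    }

Because both functions take the key's maximum size, they work with any
dshash table whose key is a string buffer, regardless of the buffer size.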
I did notice a cfbot failure [0]. After a quick glance, I'm assuming this
is caused by the memcpy() in insert_into_bucket(). Even if the string is
short, it will copy the maximum key size, which can read past the end of
the string's allocation. So, 0002 is
totally broken at the moment, and we'd need to teach insert_into_bucket()
to strcpy() instead for string keys to fix it. Perhaps we could add a
field to dshash_parameters for this purpose...
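To illustrate, the new field could hold a key-copy callback along these
lines (the names dshash_copy_function, dshash_memcpy, and dshash_strcpy
are hypothetical, just to sketch the idea):

    /* Hypothetical: per-table key-copy callback in dshash_parameters. */
    typedef void (*dshash_copy_function) (void *dest, const void *src,
                                          size_t size, void *arg);

    /* Default: copy the full key size, as insert_into_bucket() does today. */
    void
    dshash_memcpy(void *dest, const void *src, size_t size, void *arg)
    {
        memcpy(dest, src, size);
    }

    /* For string keys: copy only up to the terminating NUL. */
    void
    dshash_strcpy(void *dest, const void *src, size_t size, void *arg)
    {
        Assert(strlen((const char *) src) < size);

        strcpy((char *) dest, (const char *) src);
    }

insert_into_bucket() would then call the table's copy callback instead of
memcpy() when copying the key into the new entry.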
--
Nathan Bossart
Amazon Web Services: https://aws.amazon.com