From: Jeff Davis <pgsql(at)j-davis(dot)com>
To: John Naylor <johncnaylorls(at)gmail(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Gurjeet Singh <gurjeet(at)singh(dot)im>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Change GUC hashtable to use simplehash?
Date: 2023-12-08 20:34:59
Message-ID: 456e1822959638b93af779582cc7c2cbc0a178ca.camel@j-davis.com
Lists: pgsql-hackers
I committed 867dd2dc87, which means my use case for a fast GUC hash
table (quickly setting proconfigs) is now solved.
Andres mentioned that it could still be useful to reduce overhead in a
few other places:
https://postgr.es/m/20231117220830.t6sb7di6h6am4ep5@awork3.anarazel.de
How should we evaluate GUC hash table performance optimizations? Just
microbenchmarks, or are there end-to-end tests where the costs are
showing up?
(As I said in another email, I think the hash function APIs justify
themselves regardless of improvements to the GUC hash table.)
On Wed, 2023-12-06 at 07:39 +0700, John Naylor wrote:
> > There's already a patch to use simplehash, and the API is a bit
> > cleaner, and there's a minor performance improvement. It seems fairly
> > non-controversial -- should I just proceed with that patch?
>
> I won't object if you want to commit that piece now, but I hesitate to
> call it a performance improvement on its own.
>
> - The runtime measurements I saw reported were well within the noise
> level.
> - The memory usage starts out better, but with more entries is worse.
I suppose I'll wait until there's a reason to commit it, then.
Regards,
Jeff Davis