From: | "Joel Jacobson" <joel(at)compiler(dot)org> |
---|---|
To: | "Dean Rasheed" <dean(dot)a(dot)rasheed(at)gmail(dot)com>, "Michael Paquier" <michael(at)paquier(dot)xyz> |
Cc: | "Aaron Altman" <aaronaltman(at)posteo(dot)net>, pgsql-hackers(at)lists(dot)postgresql(dot)org |
Subject: | Re: Optimize numeric.c mul_var() using the Karatsuba algorithm |
Date: | 2024-06-30 15:17:44 |
Message-ID: | 3b1af857-c372-473b-ba21-5c1586e4a3d9@app.fastmail.com |
Lists: | pgsql-hackers |
On Sat, Jun 29, 2024, at 14:22, Dean Rasheed wrote:
> However, I really don't like having these magic constants at all,
> because in practice the threshold above which the Karatsuba algorithm
> is a win can vary depending on a number of factors, such as whether
> it's running on 32-bit or 64-bit, whether or not SIMD instructions are
> available, the relative timings of CPU instructions, the compiler
> options used, and probably a bunch of other things.
...
> Doing a quick test on my machine, using random equal-length inputs of
> various sizes, I got the following performance results:
>
> digits | rate (HEAD) | rate (patch) | change
> --------+---------------+---------------+--------
> 10 | 6.060014e+06 | 6.0189365e+06 | -0.7%
> 100 | 2.7038752e+06 | 2.7287925e+06 | +0.9%
Does the PostgreSQL community have access to some kind of performance farm
these days, covering some or all of the supported hardware architectures?
Personally, I have three machines:
MacBook Pro M3 Max
Intel Core i9-14900K
AMD Ryzen 9 7950X3D
In addition, I usually spin up a few AWS instances of different types,
but that is a bit scary: I once forgot to turn them off for a week,
which was quite costly.
A performance farm would be much nicer!
If one exists, please let me know, and there is no need to read the rest of this email.
Otherwise:
Imagine if we could send a patch to a separate mailing list, and the system
would auto-detect which catalog functions are affected and automatically
generate a performance report showing the delta per platform.
Binary functions, like numeric_mul(), would produce an image where the two
axes are the sizes of the two inputs and the color of each pixel shows the
performance gain/loss, whereas unary functions, like sqrt(), would get a plot
with the size of the input on the x-axis and the performance gain/loss on the y-axis.
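To make the binary-function case concrete, here is a rough sketch (in Python,
just for illustration) of how such a grid could be produced for numeric_mul()
against a single server; two such grids, one for HEAD and one for the patch,
could then be divided to get the gain/loss image. The DSN, digit grid and
repeat count are made up, and psycopg2/numpy/matplotlib are assumed to be
available:

import random
import time

import matplotlib.pyplot as plt
import numpy as np
import psycopg2

DIGITS = [10, 100, 1000, 10000]   # per-input digit counts to test
REPEATS = 10000                   # multiplications per measurement

def random_numeric(ndigits):
    # Random numeric literal with the given number of digits.
    return ''.join(random.choice('123456789') for _ in range(ndigits))

def rate(cur, d1, d2):
    # Multiplications per second for inputs of d1 and d2 digits.
    # A temp table is used so that a * b is evaluated once per row instead
    # of being constant-folded away by the planner.
    cur.execute("DROP TABLE IF EXISTS bench_mul")
    cur.execute("CREATE TEMP TABLE bench_mul (a numeric, b numeric)")
    cur.execute("INSERT INTO bench_mul "
                "SELECT %s::numeric, %s::numeric FROM generate_series(1, %s)",
                (random_numeric(d1), random_numeric(d2), REPEATS))
    start = time.perf_counter()
    cur.execute("SELECT count(a * b) FROM bench_mul")
    cur.fetchone()
    return REPEATS / (time.perf_counter() - start)

with psycopg2.connect("dbname=postgres") as conn:
    with conn.cursor() as cur:
        grid = np.array([[rate(cur, d1, d2) for d2 in DIGITS] for d1 in DIGITS])

plt.imshow(grid, origin='lower')
plt.xticks(range(len(DIGITS)), DIGITS)
plt.yticks(range(len(DIGITS)), DIGITS)
plt.xlabel('digits of input 2')
plt.ylabel('digits of input 1')
plt.colorbar(label='multiplications / second')
plt.savefig('numeric_mul_rate.png')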
How to test each catalog function would of course need to be designed
manually, but perhaps the detection of affected functions could be automated,
accepting some false positives/negatives, i.e. benchmarking too many or too
few catalog functions for a given patch.
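And for the detection part, just to illustrate the kind of imprecision I mean,
a very naive approach could be to collect identifiers from the changed lines
of a patch and match them against pg_proc.prosrc, which for built-in functions
holds the C symbol name. This would already give false negatives (a patch
touching only the static helper mul_var() never mentions numeric_mul()), so
the manually designed tests would still need some explicit mapping. The regex
and the connection string below are, again, only illustrative:

import re
import sys

import psycopg2

def touched_symbols(patch_text):
    # Identifiers appearing on added/removed lines of a unified diff.
    symbols = set()
    for line in patch_text.splitlines():
        if line.startswith(('+', '-')) and not line.startswith(('+++', '---')):
            symbols.update(re.findall(r'\b[a-z_][a-z0-9_]*\b', line))
    return symbols

patch = sys.stdin.read()
with psycopg2.connect("dbname=postgres") as conn:
    with conn.cursor() as cur:
        # prosrc of a built-in (internal-language) function is its C symbol name.
        cur.execute("SELECT proname FROM pg_proc WHERE prosrc = ANY(%s)",
                    (list(touched_symbols(patch)),))
        for (proname,) in cur.fetchall():
            print(proname)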
Catalog functions are just a tiny part of PostgreSQL, so there would of course
need to be other tests covering other things as well, but since catalog
functions are simple to test predictably, they might be a good starting point
for such a project, even if they are far from the most important thing to benchmark.
I found an old performance farm topic from 2012, but the discussion seems to
have just stopped, for reasons that aren't clear to me.
Regards,
Joel