From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Dean Rasheed <dean(dot)a(dot)rasheed(at)gmail(dot)com>
Cc: PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Inaccurate results from numeric ln(), log(), exp() and pow()
Date: 2015-11-13 23:10:39
Message-ID: 8680.1447456239@sss.pgh.pa.us
Lists: pgsql-hackers

Dean Rasheed <dean(dot)a(dot)rasheed(at)gmail(dot)com> writes:
> On 13 November 2015 at 18:36, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> Seems like we arguably should do this whenever the weight isn't zero,
>> so as to minimize the number of sqrt() steps.
> It's a bit arbitrary. There is a tradeoff here -- computing ln(10) is
> more expensive than doing a sqrt() since the Babylonian algorithm used
> for sqrt() is quadratically convergent, whereas the Taylor series for
> ln() converges more slowly. At the default precision, ln(10) is around
> 7 times slower than sqrt() on my machine, although that will vary with
> precision, and the sqrt()s increase the local rscale and so they will
> get slower. Anyway, it seemed reasonable to not do the extra ln()
> unless it was going to save at least a couple of sqrt()s.
OK --- I think I miscounted how many sqrt()s we could expect to save.
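
For concreteness, here's the shape of the computation we're discussing,
written as a throwaway sketch in plain double arithmetic -- this is not
the numeric-type code, and ln_sketch and its internals are made-up names:

#include <float.h>
#include <math.h>

/*
 * Sketch of the ln() scheme: (1) factor the argument as x * 10^k so the
 * decimal weight k is handled by a single k * ln(10) term; (2) reduce x
 * toward 1 by repeated sqrt(), halving ln(x) each time (the numeric
 * code's sqrt() uses the quadratically-convergent Babylonian method);
 * (3) sum the series ln(x) = 2z (1 + z^2/3 + z^4/5 + ...) with
 * z = (x - 1)/(x + 1), then undo the reductions.  Assumes a > 0.
 */
static double
ln_sketch(double a)
{
	int			k = (int) floor(log10(a));	/* decimal weight of a */
	double		x = a / pow(10.0, k);		/* now roughly 1 <= x < 10 */
	int			nsqrt = 0;

	/* each sqrt() halves ln(x); stop once the series will converge fast */
	while (x > 1.1 || x < 0.9)
	{
		x = sqrt(x);
		nsqrt++;
	}

	/* series about 1: z is small, so terms shrink by z^2 each step */
	double		z = (x - 1.0) / (x + 1.0);
	double		zsq = z * z;
	double		term = z;
	double		sum = 0.0;
	int			n = 1;

	do
	{
		sum += term / n;
		term *= zsq;
		n += 2;
	} while (fabs(term) / n > fabs(sum) * DBL_EPSILON);

	/* ln(a) = 2^nsqrt * ln(reduced x) + k * ln(10) */
	return ldexp(2.0 * sum, nsqrt) + k * log(10.0);
}

Each trip through the reduction loop costs one sqrt(); factoring out the
weight k trades the sqrt() steps that would have gone into shrinking the
10^k part for one ln(10) computation, which by your numbers only pays
off when it saves at least a couple of them.
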
One more thing: the approach you used in power_var() of doing a whole
separate exp * ln(base) calculation to approximate the result weight
seems mighty expensive, even if it is done at minimal precision.
Couldn't we get a good-enough approximation using basically
numericvar_to_double_no_overflow(exp) * estimate_ln_weight(base) ?
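Spelling out the arithmetic: base^exp = 10^(exp * log10(base)), so the
result's decimal weight should be close to floor(exp * log10(base)),
and all of that can be done in throwaway double arithmetic.  A made-up
illustration of the double-precision idea (not the actual numeric
helpers, and glossing over exactly what estimate_ln_weight() returns):

#include <math.h>

/* decimal weight of base^exp, since base^exp = 10^(exp * log10(base));
 * assumes base > 0 */
static int
pow_weight_sketch(double base, double exp)
{
	return (int) floor(exp * log10(base));
}

For 2^1000 this gives floor(1000 * 0.30103) = 301, and 2^1000 is indeed
about 1.07e301.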
regards, tom lane