From: Florian Pflug <fgp(at)phlo(dot)org>
To: David Rowley <dgrowleyml(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Dean Rasheed <dean(dot)a(dot)rasheed(at)gmail(dot)com>, Josh Berkus <josh(at)agliodbs(dot)com>, Greg Stark <stark(at)mit(dot)edu>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [PATCH] Negative Transition Aggregate Functions (WIP)
Date: 2014-01-18 01:20:26
Message-ID: 289425F7-3B1D-4A6A-A9B5-5648EE2BBBAC@phlo.org
Lists: pgsql-hackers
First, I've got the feeling that I should somehow update the commitfest app,
but I don't really know in which way. Should I put myself in as a reviewer,
or as a second author? Or neither? Suggestions welcome...
On Jan 17, 2014, at 23:34, David Rowley <dgrowleyml(at)gmail(dot)com> wrote:
> The test turned out to become:
>   if (state->expectedScale == -1)
>       state->expectedScale = X.dscale;
>   else if (state->expectedScale != X.dscale)
>       state->expectedScale = -2;
>
> In do_numeric_accum, then when we do the inverse I just check if
> expectedScale < 0 then return NULL.
OK, so this will rescan if and only if the dscales of the inputs don't all
match. I still think that's overly pessimistic - we've only got a problem
when we remove the input with the largest dscale, no? So my suggestion would
be

  state->maxDScale = Max(X.dscale, state->maxDScale);

in do_numeric_accum, and in the inverse

  if (X.dscale == state->maxDScale)
      PG_RETURN_NULL();

I'd think that this avoids more restarts with about the same effort, but I
haven't tried it, so maybe I'm missing something.
> I'm not set on it, and I'm willing to try the lazy zeroing of the scale
> tracker array, but I'm just not quite sure what extra real use cases it
> covers that the one above does not. Perhaps my way won't perform inverse
> transitions if the query did sum(numericCol / 10) or so.
Dunno how many people SUM over numerics with different dscales. Easily
possible that it's few enough to not be worth fussing over.
> create table num (num numeric(10,2) not null);
> insert into num (num) select * from generate_series(1,20000);
> select sum(num) over(order by num rows between current row and unbounded following) from num; -- 124ms
> select sum(num / 10) over(order by num rows between current row and unbounded following) from num; -- 254ms
> select sum(num / 1) over(order by num rows between current row and unbounded following) from num; -- 108156.917 ms
>
> The divide by 1 case is slow because of those weird 20 trailing zeros
> (instead of 16) that you get when dividing a numeric by 1; they cause the
> inverse transition function to return NULL because the scale changes.
>
> I've not tested an unpatched version yet to see how that divide by 1 query
> performs on that but I'll get to that soon.
So without the patch, all three queries should perform similarly, i.e. take
about 100 seconds, right? If so, the improvement is fantastic!
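
BTW, the reason the dscale jumps around in the divide-by-1 case is that
numeric division derives the result scale from the operands' magnitudes
(select_div_scale() in numeric.c), so the dscale varies from row to row.
From memory, so take the exact outputs with a grain of salt:

  select 1.00::numeric / 1;
  --> 1.00000000000000000000    (dscale 20)
  select 20000.00::numeric / 1;
  --> 20000.000000000000        (dscale 12)

which is exactly what trips the expectedScale check.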
> I'm thinking that the idea about detecting the numeric range with floating
> point types and performing an inverse transition providing the range has
> not gone beyond some defined danger zone is not material for this patch...
> I think it would be not a great deal of work to code, but the testing involved
> would probably make this patch not possible for 9.4
Yeah, I never imagined that this would happen for 9.4.
> The latest version of the patch is attached.
OK, there are a few more comments:
* build_aggregate_fnexprs() should allow InvalidOid to be passed for
invtransfn_oid, I think. I don't quite like that we construct the inverse
transition expression even for plain aggregates, and avoiding that requires
just an additional if.
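Untested sketch of what I mean, mirroring how transfnexpr is built there
(I'm assuming the patch's parameter names here):

  /* Build the inverse expression only if an inverse function exists */
  if (OidIsValid(invtransfn_oid))
      *invtransfnexpr = (Expr *) makeFuncExpr(invtransfn_oid,
                                              agg_state_type,
                                              args,
                                              InvalidOid,
                                              agg_input_collation,
                                              COERCE_EXPLICIT_CALL);
  else
      *invtransfnexpr = NULL;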
* Don't we need to check for volatile functions in the filter expression too?
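I'd expect roughly the following, analogous to the volatility checks we do
elsewhere (untested, and use_inverse is a stand-in for whatever the patch
calls its flag):

  /* A volatile filter expression makes inverse transitions unsafe */
  if (contain_volatile_functions((Node *) wfunc->aggfilter))
      use_inverse = false;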
* As it stands, I don't think intXand_inv and intXor_inv are worth it, since
the case where they return non-NULL is awfully slim (only when the removed
input has all bits set, respectively all bits zero). We should either track
the number of zeros and ones per bit, which would allow us to always perform
inverse transitions (see the sketch below), or just drop them.
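To sketch the per-bit counting idea (the struct and function names are made
up, and this is untested):

  typedef struct Int8AndOrAggState
  {
      int64   count;          /* inputs currently in the frame */
      int64   setcount[64];   /* how many inputs have bit i set */
  } Int8AndOrAggState;

  /* delta is +1 in the forward, -1 in the inverse transition function */
  static void
  bitcount_update(Int8AndOrAggState *state, uint64 value, int delta)
  {
      int     i;

      state->count += delta;
      for (i = 0; i < 64; i++)
      {
          if (value & (UINT64CONST(1) << i))
              state->setcount[i] += delta;
      }
  }

  /*
   * AND sets bit i iff all inputs have it set, OR sets bit i iff at
   * least one input has it set. Both are exact, so the inverse
   * transition never has to return NULL.
   */
  static uint64
  bitcount_result(Int8AndOrAggState *state, bool is_and)
  {
      uint64  result = 0;
      int     i;

      for (i = 0; i < 64; i++)
      {
          if (is_and ? (state->setcount[i] == state->count)
                     : (state->setcount[i] > 0))
              result |= UINT64CONST(1) << i;
      }
      return result;
  }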
* Quite a few of the inverse transition functions are marked strict, yet
contain code to handle NULL inputs. You can just remove that code - the
system makes sure that strict functions never receive NULL arguments.
Affected are, AFAICS, numeric_accum_inv, numeric_avg_accum_inv,
int2_accum_inv, int4_accum_inv, int8_accum_inv, int8_avg_accum_inv,
int2_sum_inv, int4_sum_inv and int8_sum_inv. Not sure that list is
exhaustive...
* For any of the new inverse transition functions, I'd be inclined to just
elog() if they're called directly and not as an aggregate - in particular
those which check for that anyway, plus the *smaller_inv and *larger_inv
ones. I don't see why anyone would ever want to call these directly - and
if we elog() now, we can easily change these functions later, because no
external code can depend on them. E.g., maybe someone wants to keep the top
N elements in the MIN/MAX aggregates one day...
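I.e., something like this at the top of each of them - AggCheckCallContext
already exists for exactly this kind of check:

  if (!AggCheckCallContext(fcinfo, NULL))
      elog(ERROR, "aggregate function called in non-aggregate context");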
* The number of new regression tests seems a bit excessive. I don't think
there's really a policy on what to test and what not, but in this case I
think it suffices to test the basic machinery plus the more complex
functions. E.g., most of the SUM and AVG aggregates use numeric_accum or
numeric_avg_accum internally, and the wrapper code basically just does
datatype conversion, so testing a few cases seems enough there. What I
think we *should* test, but currently don't, is whether the core machinery
performs the expected calls of the forward and inverse transition
functions. I was thinking about creating an aggregate in the regression
tests which simply concatenates all the calls into a string, e.g. you might
get "F:1 F:2 F:3 I:1" if we aggregated 1,2,3 and then removed 1. I think
that should be possible with an SQL-language forward and inverse transition
function - see the sketch below - but I haven't tried. I can try, if you
want.
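For the record, here's the kind of thing I have in mind - untested, and
assuming the patch's CREATE AGGREGATE option for the inverse transition
function is spelled INVFUNC:

  create function logging_sfunc(text, int4) returns text
      language sql immutable as
      $$ select coalesce($1 || ' ', '') || 'F:' || $2 $$;

  create function logging_invfunc(text, int4) returns text
      language sql immutable as
      $$ select $1 || ' I:' || $2 $$;

  create aggregate logging_agg(int4) (
      stype = text,
      sfunc = logging_sfunc,
      invfunc = logging_invfunc
  );

  -- Expect 'F:1 F:2 F:3', then 'F:1 F:2 F:3 I:1', then
  -- 'F:1 F:2 F:3 I:1 I:2' for the three rows.
  select logging_agg(i) over
         (order by i rows between current row and unbounded following)
  from generate_series(1, 3) i;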
best regards,
Florian Pflug