From: Greg Stark <gsstark(at)mit(dot)edu>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Aggregate "rollup"
Date: 2003-03-06 04:10:26
Message-ID: 87k7fd6sp9.fsf@stark.dyndns.tv
Lists: pgsql-hackers
mlw <pgsql(at)mohawksoft(dot)com> writes:
> I had written a piece of code about two years ago that used the aggregate
> feature of PostgreSQL to create an array of integers from an aggregate, as:
>
> select int_array_aggregate( column ) from table group by column
I found this and am using it extensively. It's extremely helpful, thank you.
It goes well with either the *= operators in contrib/array or the gist
indexing in contrib/intarray.
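
For the archives, here's a rough sketch of the combination I mean — table and
column names are made up, and it assumes contrib/intagg (for
int_array_aggregate) and contrib/intarray are installed:

```sql
-- Build a lookup table of integer arrays from a detail table
-- (hypothetical names), as in mlw's original example:
CREATE TABLE fast_lookup AS
    SELECT reference, int_array_aggregate(field) AS fields
    FROM detail_table
    GROUP BY reference;

-- A gist index from contrib/intarray makes array searches fast:
CREATE INDEX fast_lookup_idx
    ON fast_lookup USING gist (fields gist__int_ops);

-- Then query with the intarray query_int operator,
-- e.g. rows whose array contains 1 and either 2 or 3:
SELECT reference FROM fast_lookup WHERE fields @@ '1&(2|3)';
```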
One problem I've found though is that the optimizer doesn't have any good
statistics for estimating the number of matches of such operators. It seems
like fixing that would require a lot of changes to the statistics gathered.
> create table fast_lookup as select reference, aggregate_array( field ) from
> table group by field
>
> Where the function aggregate_array takes any number of data types.
Sure, that seems logical. Actually I already bumped into a situation where I
wanted an array of char(1). I just kludged it by storing the ascii() value of
the first character, but it would be cleaner, and probably better for unicode
down the road, to store the actual character.
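
The kludge looks something like this (names hypothetical) — since
int_array_aggregate only handles integers, I aggregate the character's ASCII
code instead:

```sql
-- flag is a char(1) column; ascii() maps it to an integer so
-- int_array_aggregate (contrib/intagg) can aggregate it:
SELECT reference, int_array_aggregate(ascii(flag)) AS flag_codes
FROM detail_table
GROUP BY reference;
```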
Someone else on the list already asked for a function that returned an array of
varchar. I think they were pointed at a general-purpose function from PL/R.
--
greg