From: Rafal Pietrak <rafal(at)zorro(dot)isa-geek(dot)com>
To: pgsql-general(at)postgresql.org
Subject: Re: An aggregate function on ARRAY
Date: 2010-08-12 12:47:27
Message-ID: 1281617247.4673.15.camel@localhost.localdomain
Lists: pgsql-general
On Wed, 2010-08-11 at 09:53 -0400, Merlin Moncure wrote:
> On Wed, Aug 11, 2010 at 8:42 AM, Rafal Pietrak <rafal(at)zorro(dot)isa-geek(dot)com> wrote:
[....]
> >
> > SELECT min(A[1]) as a1, min(A[2]) as a2, ...
> >
> > This is because aggregate functions are not defined on ARRAY types. Or
> > may be there is an easier and more readable way to do that?
>
> If you have a fixed number of elements across the entire table, you
I don't. Although I may try to constrain the problem to that, if I
assume a maximum size for the array.
> can accomplish what I think you are trying to do by expanding all the
> arrays in the table and regrouping based on generate_series(), but
> this is a horribly inefficient way to go. Are you sure you aren't
> looking at table design issue?
I actually did a sort of this, by having an intermediate table which
decomposes the array into separate rows and then computing the aggregate
there. But this is horrible, unreadable, and only useful in the case of
a single-dimension array.
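For the record, the expand-and-regroup approach I tried looks roughly
like this (a sketch only: the table name "measurements" and its
one-dimensional integer[] column "a" are illustrative, not my real
schema; putting generate_series() in the inner SELECT list avoids
needing a fixed element count):

```sql
-- Hypothetical table: measurements(id integer, a integer[])
-- Expand each array into (index, value) pairs, then regroup by index.
SELECT i,
       min(a[i]) AS min_val,
       max(a[i]) AS max_val,
       avg(a[i]) AS avg_val
FROM (SELECT a,
             generate_series(array_lower(a, 1), array_upper(a, 1)) AS i
      FROM measurements) AS expanded
GROUP BY i
ORDER BY i;
```

It works for arrays of varying length, but as said above it only
handles one dimension and scans every element of every array.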
I'm gathering measurement data, which most naturally goes into an array.
The data is time-related, and the array is used to bin it up into a
variable number of bins - an array of variable size.
Any hints on how I can arrange that sort of data (preferably into an
array) and subsequently be able to compute statistics functions on all
those bins at the same time?
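In case it helps frame the question: the table-design alternative Merlin
alludes to would presumably normalize the bins into rows, something like
the following (names are purely illustrative assumptions):

```sql
-- One row per (measurement series, bin): variable bin counts come for
-- free, and per-bin aggregates work directly, at the cost of more rows.
CREATE TABLE measurement_bins (
    series_id integer NOT NULL,
    bin_no    integer NOT NULL,
    value     double precision,
    PRIMARY KEY (series_id, bin_no)
);

SELECT bin_no, min(value), max(value), avg(value), stddev(value)
FROM measurement_bins
GROUP BY bin_no
ORDER BY bin_no;
```

I'd still prefer to keep the array form if the aggregation can be made
readable.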
-R