From: Richard Huxton <dev(at)archonet(dot)com>
To: George Young <gry(at)ll(dot)mit(dot)edu>
Cc: pgsql-sql(at)postgresql(dot)org
Subject: Re: need to join successive log entries into one
Date: 2001-03-15 07:39:18
Message-ID: 3AB071A6.E07AB2@archonet.com
Lists: pgsql-sql
George Young wrote:
>
> On Wed, 14 Mar 2001, you wrote:
> > On 3/14/01, 5:24:12 PM, George Young <gry(at)ll(dot)mit(dot)edu> wrote regarding [SQL]
> > I need to join successive log entries into one:
> > > I have a table like:
> >
> > > run | seq | start | done
> > > 1415| 261| 2001-01-29 12:36:55| 2001-02-07 13:02:38
> > > 1415| 263| 2001-02-14 07:40:04| 2001-02-15 16:05:04
> > > 1415| 264| 2001-02-16 16:05:05| 2001-03-08 16:34:03
> > > 1415| 265| 2001-03-08 16:34:04|
> >
> > Try:
> >
> > select run,min(start),max(done) from mytable group by run;
>
> Alas, this combines *all* entries for a given run, not just those that
> are immediately adjacent (in time, or by 'seq' number)...
I thought it was complicated, then I thought it was easy. It looks like I
was right the first time.
I was thinking that some huge self-join might do it, but I can't see how
to go beyond a run of two adjacent entries.
The only thing I can think of is to add a "batch" column and build a
trigger to set it as data is inserted. I'm assuming the entries are put
in one at a time and in order. That way you just need to look at the
last entry to determine if the new one is in the same batch.
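Something along these lines might do it - purely a sketch, written against a
current PostgreSQL, and I'm guessing at a table name of "log" and treating
"adjacent" as meaning consecutive seq numbers, so adjust to taste:

    ALTER TABLE log ADD COLUMN batch integer;

    CREATE FUNCTION set_batch() RETURNS trigger AS $$
    DECLARE
        prev RECORD;
    BEGIN
        -- fetch the most recent entry for this run
        SELECT seq, batch INTO prev
          FROM log
         WHERE run = NEW.run
         ORDER BY seq DESC
         LIMIT 1;

        IF NOT FOUND THEN
            NEW.batch := 1;                  -- first entry for this run
        ELSIF NEW.seq = prev.seq + 1 THEN
            NEW.batch := prev.batch;         -- adjacent: same batch
        ELSE
            NEW.batch := prev.batch + 1;     -- gap: start a new batch
        END IF;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER log_set_batch
        BEFORE INSERT ON log
        FOR EACH ROW EXECUTE PROCEDURE set_batch();

Then the GROUP BY from my last message just needs batch added to it:

    select run, batch, min(start), max(done)
      from log
     group by run, batch;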
Any use?
- Richard Huxton
> --
> George Young, Rm. L-204 gry(at)ll(dot)mit(dot)edu
> MIT Lincoln Laboratory
> 244 Wood St.
> Lexington, Massachusetts 02420-9108 (781) 981-2756