From: Camm Maguire <camm(at)enhanced(dot)com>
To: Mike Castle <dalgoda(at)ix(dot)netcom(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Sequences in transaction
Date: 2000-12-11 16:14:42
Message-ID: 54g0jvx7d9.fsf@intech19.enhanced.com
Lists: pgsql-general
Greetings! I've just found ... nothing wrong! This works pretty well, to my
surprise. Thanks so much for the suggestion. I did a little rewrite
that builds a doubly-linked list table of dates, with prior-date and
next-date columns maintained by triggers. I then retrieve adjacent
pairs of data-table rows via a merge join with this table. This appears to
be faster than issuing a subselect ... order by ... limit 1 for
each data row, but your key idea (to me, at least) is that I can avoid
sequential sequence numbers by referring explicitly to the ordering
of the date values themselves.
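For readers following along, here is a minimal sketch of the linked-list-of-dates
idea described above. All table and column names (prices, date_links, obs_date,
prior_date, next_date) are hypothetical, and it uses Python's sqlite3 for
illustration rather than PostgreSQL; the prior/next links are filled in by hand
here, where the original post maintains them with triggers.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Data table: one row per date (hypothetical schema).
cur.execute("CREATE TABLE prices (obs_date TEXT PRIMARY KEY, price REAL)")
cur.executemany(
    "INSERT INTO prices VALUES (?, ?)",
    [("2000-12-01", 10.0), ("2000-12-04", 11.0), ("2000-12-05", 9.5)],
)

# Doubly-linked list of dates: prior_date/next_date record adjacency
# directly, so no gap-free sequence number is needed.
cur.execute(
    """CREATE TABLE date_links (
           obs_date   TEXT PRIMARY KEY,
           prior_date TEXT,
           next_date  TEXT)"""
)
cur.executemany(
    "INSERT INTO date_links VALUES (?, ?, ?)",
    [
        ("2000-12-01", None, "2000-12-04"),
        ("2000-12-04", "2000-12-01", "2000-12-05"),
        ("2000-12-05", "2000-12-04", None),
    ],
)

# Adjacent pairs of data rows via a join through the link table,
# instead of a per-row "order by ... limit 1" subselect.
cur.execute(
    """SELECT a.obs_date, a.price, b.obs_date, b.price
         FROM prices a
         JOIN date_links l ON l.obs_date = a.obs_date
         JOIN prices b     ON b.obs_date = l.next_date
        ORDER BY a.obs_date"""
)
pairs = cur.fetchall()
# Each row pairs a date with its successor, e.g.
# ('2000-12-01', 10.0, '2000-12-04', 11.0)
```

In PostgreSQL the same query shape works, with INSERT/DELETE triggers on the
dates table splicing prior_date/next_date so the list stays consistent.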
Thanks again!
Mike Castle <dalgoda(at)ix(dot)netcom(dot)com> writes:
> On Tue, Dec 05, 2000 at 12:03:40PM -0500, Camm Maguire wrote:
> > need to be able to *quickly* select a pair of *adjacent* rows in a
> > table. t2.seq = t1.seq + 1 seems to work pretty well. Of course, I
>
> What's wrong with a select ... order by .. limit 2 ?
>
> mrc
> --
> Mike Castle Life is like a clock: You can work constantly
> dalgoda(at)ix(dot)netcom(dot)com and be right all the time, or not work at all
> www.netcom.com/~dalgoda/ and be right at least twice a day. -- mrc
> We are all of us living in the shadow of Manhattan. -- Watchmen
>
>
--
Camm Maguire camm(at)enhanced(dot)com
==========================================================================
"The earth is but one country, and mankind its citizens." -- Baha'u'llah