From: Ed Loehr <eloehr(at)austin(dot)rr(dot)com>
To: "Ross J. Reedstrom" <reedstrm(at)wallace(dot)ece(dot)rice(dot)edu>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: [GENERAL] using ID as a key
Date: 2000-02-07 19:23:10
Message-ID: 389F1B9E.C845A107@austin.rr.com
Lists: pgsql-general
"Ross J. Reedstrom" wrote:
>
> > ... One way to do a programmatic ID
> > generation as David B. suggests is to have a DB server whose purpose
> > is to issue "seed numbers" which designate, through use of a
> > pre-determined multiplier, a *range* of allocated IDs guaranteed to be
> > unique across all seed requesters. This avoids both of the problems
> > raised above (throughput and synchronization).
>
> ... sounds like a very nice description of the pgsql sequence object.
>
> In particular, the 'cache' option sets the number of sequence numbers
> allocated to a particular backend for any call to 'nextval' on that
> sequence, exactly as you described. ...
>
> Also, cache size is a property of the sequence object, rather than the
> connection or individual call, so even if your program knows it's going
> to need, say, 1000 sequence values at a crack while another backend is
> only going to need one at a time, and rarely, there's no way to optimize
> the cache size for both backends. Might be an interesting extension to
> the sequence object: a call that increases the cache size for this
> connection only. That way, if you're doing a bulk load, for example,
> you can grab numbers in large blocks and just fire away.
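For concreteness, the cache option being described looks roughly like this
in PostgreSQL DDL (a minimal sketch; the sequence name is illustrative):

    -- Each backend that calls nextval() on this sequence grabs a block
    -- of 1000 values at a time instead of coordinating on every call.
    CREATE SEQUENCE order_id_seq CACHE 1000;

    -- Served from the backend's cached block until it is exhausted.
    SELECT nextval('order_id_seq');

The cache size is fixed per sequence when it is created, which is exactly
the limitation discussed above; later PostgreSQL versions can change it
with ALTER SEQUENCE ... CACHE, but still for the sequence as a whole
rather than per connection.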
Ya know, I'd read about (and totally forgotten) the cache option on
postgres sequence objects. But yes, it sounds like a very similar idea in
a different implementation (and not my idea).
I see your point about the missing ability to optimize the allocation
range. Our approach had a built-in assumption that all needs were for
roughly the same range size (lots of webservers, relatively balanced
load), which was fine. Your idea definitely would add useful
flexibility.
We didn't use transactions in our implementation, so how they interact
with ID generation is an open question for me. Regarding managing
concurrent transactions and ID generation, one possibility in a many-host
system where scalability matters might be to have the "local" servers
keep their own (partial?) databases and manage the transaction issue with
a "local" sequence object. It would have to make sure to constrain the
domain of the local sequence object to the range allocated from the
central seed server. I suppose one might need the central seed server to
use a sequence object as well.
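As a rough sketch of that idea (the sequence name and block bounds are
hypothetical): if the central seed server hands a local server the block
5000-5999, the local server could constrain its own sequence to exactly
that range:

    -- Local sequence limited to the block allocated by the central
    -- seed server (here 5000-5999; an ascending sequence starts at
    -- its MINVALUE by default).
    CREATE SEQUENCE local_id_seq
        MINVALUE 5000
        MAXVALUE 5999
        NO CYCLE;   -- nextval() errors once the block is used up,
                    -- which is the cue to request a new block

    SELECT nextval('local_id_seq');

When the block runs out, the local server would go back to the central
seed server for a new range and recreate (or, in later PostgreSQL
versions, ALTER) the sequence with the new bounds.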
Cheers,
Ed Loehr