From: Peter Crabtree <peter(dot)crabtree(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Generating Lots of PKs with nextval(): A Feature Proposal
Date: 2010-05-14 21:52:02
Message-ID: AANLkTilq_4JxDIU_u-F7W2fWfttE21A5GugQKnp0_Tzw@mail.gmail.com
Lists: pgsql-hackers
On Fri, May 14, 2010 at 5:27 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Peter Crabtree <peter(dot)crabtree(at)gmail(dot)com> writes:
>> Now, I was reminded that I could simply do this:
>
>> SELECT nextval('my_seq') FROM generate_series(1, 500);
>
>> But of course then I would have no guarantee that I would get a
>> contiguous block of ids,
>
> The existing "cache" behavior will already handle that for you,
> I believe. I don't really see a need for new features here.
I don't see how that works for this case: the CACHE setting is
static, and it's shared between sessions. If I need ids for 10
records one time, 100 the next, and 587 the third, what should
CACHE be set to for that sequence?
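To make that concrete, here's a minimal sketch of how the cache is
carved up per backend (the sequence name and sizes are just for
illustration):

    CREATE SEQUENCE my_seq CACHE 10;  -- each backend preallocates 10 values

    -- Session A:
    SELECT nextval('my_seq');  -- returns 1; this backend now holds 2-10

    -- Session B:
    SELECT nextval('my_seq');  -- returns 11; skips over A's cached chunk

The cache size is fixed at DDL time and applies to every backend
alike, and any cached values a backend doesn't use are simply lost.
So the cache produces gaps; it doesn't let one session claim "exactly
N contiguous ids, right now".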
And if I do ALTER SEQUENCE ... CACHE each time, I have either killed
concurrency (because I'm locking other sessions out of using that
sequence until I'm finished with it), or I have a race condition (if
someone else issues an ALTER SEQUENCE after mine but before I call
nextval()). The same problem exists with ALTER SEQUENCE ... INCREMENT BY.
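For instance, with INCREMENT BY (sequence name and sizes assumed,
just to show the interleaving):

    -- Session A wants a contiguous block of 500 ids:
    ALTER SEQUENCE my_seq INCREMENT BY 500;

    -- Session B sneaks in, wanting a block of 10:
    ALTER SEQUENCE my_seq INCREMENT BY 10;

    -- Session A, unaware of B's change:
    SELECT nextval('my_seq');  -- advances by 10, not 500, so the 500-id
                               -- block A thought it reserved was never
                               -- actually reserved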
Peter