Re: Could postgres12 support millions of sequences? (like 10 million)

From: pabloa98 <pabloa98(at)gmail(dot)com>
To: Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>
Cc: "Peter J(dot) Holzer" <hjp-pgsql(at)hjp(dot)at>, "pgsql-generallists(dot)postgresql(dot)org" <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: Could postgres12 support millions of sequences? (like 10 million)
Date: 2020-03-21 03:13:03
Message-ID: CAEjudX5Zq=do9ZGUR-hFPGYp31hCSQUr20ifkeDOUywZpAngjw@mail.gmail.com
Lists: pgsql-general

> Nothing I saw that said int could not become bigint.

My bad. The code cannot be a bigint. Or it could be a bigint between 1 and
99999999 :)
I thought it was not important. The code could be a number from 1 to
99999999 (so an int will be OK), assigned roughly in order. This is because
of business requirements. The code should not change if a row is deleted,
which rules out window functions, at least for this problem. Some gaps are
acceptable as long as they are not too big (similar to a serial column when
a transaction is rolled back). We are more concerned about code-generation
collisions (for example, two transactions calculating the same code value)
than about gaps. For each pair (group, element), the code, once assigned,
should remain constant. This is because the group, the element, and the
code together become a business-related id. So:

group, element, code = bid
1, 3, 1 = bid 131
2, 1, 1 = bid 211
etc

This calculated bid follows the convention described here and is used as a
reference in other tables, so it should not change.
All this weirdness is because of business requirements. I would be fine
with a classic serial column, but that would not match our business rules.
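The bid composition above can be sketched as follows (a plain digit
concatenation is assumed here, since the examples in the thread do not
specify fixed-width padding for multi-digit parts):

```python
def make_bid(group: int, element: int, code: int) -> int:
    """Concatenate the three parts into one business id.

    Matches the examples in the thread:
    (1, 3, 1) -> 131, (2, 1, 1) -> 211.
    """
    return int(f"{group}{element}{code}")
```

Note that without fixed-width padding, (1, 31, 1) and (13, 1, 1) would
collide as 1311; a production scheme would need to pin the width of each
part.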

Regarding the implementation: our concern is to choose something that is
not slow (it does not need to be super fast, because there will be more
operations on other tables in the same transaction), and I thought that
creating/dropping sequences could be a solution. But I was not sure, and I
am not sure how it will behave with millions of sequences.

If there is another solution, different from millions of sequences, that
does not block, generates few gaps (and only small ones), and never
generates collisions, then I will happily use it.
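One commonly suggested alternative (an assumption on my part, not something
settled in this message) is a counter table with one row per (group,
element) pair, incremented with an atomic upsert. In PostgreSQL that would
be a single statement like INSERT ... ON CONFLICT (grp, element) DO UPDATE
SET last_code = code_counter.last_code + 1 RETURNING last_code, which
cannot hand the same code to two transactions. The sketch below uses
Python's sqlite3 (SQLite >= 3.24 for upsert support) purely to illustrate
the idea; the table and function names are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# "grp" instead of "group", which is a reserved word in SQL.
conn.execute(
    """
    CREATE TABLE code_counter (
        grp       INTEGER NOT NULL,
        element   INTEGER NOT NULL,
        last_code INTEGER NOT NULL,
        PRIMARY KEY (grp, element)
    )
    """
)

def next_code(conn: sqlite3.Connection, grp: int, element: int) -> int:
    """Assign the next code for a (group, element) pair via an upsert.

    A new pair starts at 1; an existing pair is incremented in place.
    In PostgreSQL the INSERT and the read-back collapse into one
    statement with RETURNING, closing the race between them.
    """
    conn.execute(
        """
        INSERT INTO code_counter (grp, element, last_code)
        VALUES (?, ?, 1)
        ON CONFLICT (grp, element) DO UPDATE
        SET last_code = last_code + 1
        """,
        (grp, element),
    )
    (code,) = conn.execute(
        "SELECT last_code FROM code_counter WHERE grp = ? AND element = ?",
        (grp, element),
    ).fetchone()
    return code
```

The trade-off versus per-pair sequences: the counter row is locked for the
rest of the transaction, so two concurrent transactions on the same pair
serialize, but a rollback leaves no gap and no collision is possible.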

I hope I described the problem completely.

Pablo
