Re: Could postgres12 support millions of sequences? (like 10 million)

From: Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>
To: pabloa98 <pabloa98(at)gmail(dot)com>
Cc: "Peter J(dot) Holzer" <hjp-pgsql(at)hjp(dot)at>, "pgsql-generallists(dot)postgresql(dot)org" <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: Could postgres12 support millions of sequences? (like 10 million)
Date: 2020-03-21 16:47:17
Message-ID: 2c994f4f-6627-1c4d-fac5-d3c442d197e9@aklaver.com
Lists: pgsql-general

On 3/20/20 8:13 PM, pabloa98 wrote:
>
> Nothing I saw that said int could not become bigint.
>
>
> My bad. The code cannot be a bigint. Or it could be a bigint between 1
> and 99999999 :)

Aah, that was the counter Peter was talking about. I missed that.

As to what is below, that is going to require more thought.

> I thought it was not important. The code could be a number from 1 to
> 99999999 (so an int will be OK), assigned in order-ish. This is because
> of business requirements. The code should not change in case a row is
> deleted, which rules out using window functions, at least for this
> problem. There could be some gaps if they are not too big (similar to a
> serial column when a transaction is rolled back). We are more concerned
> about code generation collisions (for example, two transactions
> calculating the same code value) than about gaps. For each pair (group,
> element), the code, once assigned, should remain constant. This is
> because the group, the element, and the code will become an id
> (business-related). So:
>
> group, element, code = bid
> 1, 3, 1 = bid 131
> 2, 1, 1 = bid 211
> etc
>
> This calculated bid follows the convention described here and it is used
> as a reference in other tables. Therefore it should not change.
> All this weirdness is because of business requirements. I am good with a
> classic serial column. But that will not map our business rules.
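
The bid convention above can be sketched in SQL as plain digit concatenation (a sketch; the exact formatting is an assumption based on the examples given):

```sql
-- group 1, element 3, code 1 -> '131', matching the examples above.
-- Note: without fixed-width padding, (1, 31, 1) and (13, 1, 1) would
-- both yield '1311', so a real implementation may need zero-padding
-- of each component to keep bids unambiguous.
SELECT 1::text || 3::text || 1::text AS bid;
```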
>
> Regarding the implementation: our concern is to choose something that
> is not slow (it does not need to be super fast, because there will be
> more operations in other tables using the same transaction), and I
> thought that creating/dropping sequences could be a solution. But I was
> not sure how it would behave with millions of sequences.
>
> If there is another solution, different from millions of sequences,
> that does not block, generates few gaps (and those gaps are small), and
> never generates collisions, then I will happily use it.
>
> I hope I described the problem completely.
>
> Pablo
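
One alternative worth considering, instead of millions of sequences, is a single counter table keyed by (group, element), incremented atomically with INSERT ... ON CONFLICT (available since 9.5). This is only a sketch (table and column names are invented; "group" is avoided because it is a reserved word). It never produces collisions and only gaps on rollback, but note the trade-off against your "does not block" requirement: two concurrent transactions claiming a code for the *same* (group, element) pair will serialize on the row lock until the first commits. Different pairs do not block each other.

```sql
-- Per-(group, element) counter table; a sketch, names are invented.
CREATE TABLE code_counters (
    grp     int NOT NULL,
    element int NOT NULL,
    code    int NOT NULL DEFAULT 1,
    PRIMARY KEY (grp, element)
);

-- Atomically claim the next code for one (group, element) pair:
-- first caller inserts code 1, later callers increment under a row lock.
INSERT INTO code_counters AS c (grp, element)
VALUES (1, 3)
ON CONFLICT (grp, element)
DO UPDATE SET code = c.code + 1
RETURNING code;
```

Compared with per-pair sequences, this keeps the catalog small (one row per pair instead of one relation per pair), at the cost of serializing writers within a pair.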

--
Adrian Klaver
adrian(dot)klaver(at)aklaver(dot)com
