From: Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>
To: pabloa98 <pabloa98(at)gmail(dot)com>, Rob Sargent <robjsargent(at)gmail(dot)com>
Cc: "Peter J(dot) Holzer" <hjp-pgsql(at)hjp(dot)at>, "pgsql-generallists(dot)postgresql(dot)org" <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: Could postgres12 support millions of sequences? (like 10 million)
Date: 2020-03-22 21:04:33
Message-ID: 73f5c4ac-1bd3-d517-f507-74d042ae1365@aklaver.com
Lists: pgsql-general
On 3/21/20 11:08 AM, pabloa98 wrote:
>
> > As to below that is going to require more thought.
> >
> > Still no word on the actual requirement. As someone who believes
> > consecutive numbers on digital invoices is simply a mistaken
> > interpretation of the paper based system, I suspect a similar error
> > here. But again we haven’t really heard, far as I know. Something
> > really fishy about 99999999.
> >
>
> It is fishy. This is the thing: the code, element, and group are part of
> a bigger code called the item identifier (or ii).
>
> An item identifier is the result of: code || group || element ||
> qualifier (a constant like 55) || check digit (computed by some check
> algorithm).
>
> For example:
>
> for a triplet (group, element, code) like (1, 3, 63), the item identifier
> (or ii) is: 630010003558 (the final 8 is the check digit).
> This number is converted to a bigint and stored (and used as a PK or FK in
> other tables, etc.).
>
> In an item identifier the room is assigned like:
Revised, to make it match the identifier above:
>
> 8 digits for code (not padded with 0s)
> 3 digits for group
> 4 digits for element
> 2 digits for qualifier
> 1 digit for the check digit.
> -----------------------------
> 18 digits for item identifier.
>
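The composition above can be sketched quickly. A minimal illustration (the check-digit algorithm is not shown in the thread, so the check digit is passed in by the caller rather than computed):

```python
def make_ii(code: int, group: int, element: int, check_digit: int) -> int:
    """Compose an item identifier per the layout above:
    code (up to 8 digits, not zero-padded) || group (3 digits, zero-padded)
    || element (4 digits, zero-padded) || qualifier '55' || check digit.
    The check algorithm is unspecified here, so check_digit is an input."""
    return int(f"{code}{group:03d}{element:04d}55{check_digit}")

# The example triplet (group=1, element=3, code=63) from the thread:
print(make_ii(63, 1, 3, 8))  # → 630010003558
```

With an 8-digit code, the composed value tops out at 18 digits, which still fits in a bigint (max ~9.2 * 10^18).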
So the question may actually be:
How do we improve our locking code, so we don't have to spawn millions
of sequences?
What is the locking method you are using?
> The lock part is because we solved a similar problem with a counter by
> row-locking the counter and increasing it in another part of the
> database. The result is that all the queries using that table are queued
> by pair (group, element), which is not that bad because we are not
> inserting thousands of rows per second. Still, it is killing cluster
> performance (although performance is still OK from the business point of
> view). The problem with using locks is that they are too sensitive to
> developer errors and bugs. Sometimes connected clients abort and the
> connection is returned to the pool with the lock still active, until the
> connection is closed or someone unlocks the row. I would prefer
> something more resilient to developer/programming errors, if possible.
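The leaked-lock failure mode described above is essentially a mutex acquired without a guaranteed release path. A minimal in-process sketch (hypothetical names, not the poster's actual schema) of making release automatic, here with a Python context manager so the per-(group, element) lock is freed even when the caller's code raises:

```python
from collections import defaultdict
from threading import Lock

# One counter and one lock per (group, element) pair -- a hypothetical
# stand-in for the row-locked counter table described above.
_counters = defaultdict(int)
_locks = defaultdict(Lock)

def next_value(group: int, element: int) -> int:
    key = (group, element)
    # The `with` block releases the lock even if an exception is raised
    # mid-update -- the resilience property the poster is asking for.
    with _locks[key]:
        _counters[key] += 1
        return _counters[key]

print(next_value(1, 3))  # → 1
print(next_value(1, 3))  # → 2
print(next_value(2, 7))  # → 1
```

On the PostgreSQL side, the analogous property comes from transaction-scoped locks such as pg_advisory_xact_lock(), which are released automatically at commit or rollback rather than depending on the client to unlock.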
>
--
Adrian Klaver
adrian(dot)klaver(at)aklaver(dot)com