From: Joe Conway <mail(at)joeconway(dot)com>
To: anon permutation <anonpermutation(at)hotmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Merging Data from Multiple DB
Date: 2005-01-03 15:20:54
Message-ID: 41D962D6.7090806@joeconway.com
Lists: pgsql-general
anon permutation wrote:
> For performance reasons, each branch must have its own database and a
> centralized transactional system is not an option.
>
> I was considering just centralizing primary keys generation, but that
> seems very slow too.
>
> Segmenting primary keys among the branches is doable, but it is too much
> of a maintenance nightmare.
>
> What do you suggest?
We have a similar application. What we did is this:
1. Each database instance is assigned a unique identifier, stored in a 1
row, 1 column table (with a trigger to ensure it stays that way).
2. Write a function that can take two integers, convert them to text,
and concatenate them. In our case we convert to hex and concatenate with
a delimiter character.
3. Write another function, called something like 'nextrowid', that takes
a sequence name as its argument. Use the sequence name to get the next
value from the sequence, look up the local unique identifier from the
table defined in #1, and pass both to the function defined in #2.
4. Use nextrowid('seq_name') to generate your primary keys. A rough
sketch of these pieces follows below.
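
Something along these lines should work (an untested sketch of the steps
above; the names local_instance, make_rowid, orders_seq and orders are
just examples, not our actual code):

-- #1: a one-row, one-column table holding this instance's identifier,
-- plus a trigger that rejects any further changes
CREATE TABLE local_instance (instance_id integer NOT NULL);
INSERT INTO local_instance VALUES (1);  -- a different value at each branch

CREATE FUNCTION local_instance_guard() RETURNS trigger AS $$
BEGIN
    RAISE EXCEPTION 'local_instance must keep exactly one, unchanged row';
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER local_instance_protect
    BEFORE INSERT OR UPDATE OR DELETE ON local_instance
    FOR EACH STATEMENT EXECUTE PROCEDURE local_instance_guard();

-- #2: combine two integers into one text key: hex values joined by a delimiter
CREATE FUNCTION make_rowid(instance integer, seqval bigint) RETURNS text AS $$
    SELECT to_hex($1) || '-' || to_hex($2);
$$ LANGUAGE sql;

-- #3: take a sequence name, get its next value, and tag it with the local id
CREATE FUNCTION nextrowid(seqname text) RETURNS text AS $$
    SELECT make_rowid((SELECT instance_id FROM local_instance), nextval($1));
$$ LANGUAGE sql;

-- #4: use it to generate primary keys
CREATE SEQUENCE orders_seq;
CREATE TABLE orders (
    order_id  text PRIMARY KEY DEFAULT nextrowid('orders_seq'),
    placed_on timestamp DEFAULT now()
);

The keys come out looking like '1-1', '1-2', and so on, and cannot
collide with keys generated at another branch as long as each branch's
instance_id is unique.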
HTH,
Joe