From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: Sameer Thakur <samthakur74(at)gmail(dot)com>, Postgres General <pgsql-general(at)postgresql(dot)org>
Subject: Re: Merge requirements between offline clients and central database
Date: 2014-09-05 13:55:24
Message-ID: 1409925324.17935.YahooMailNeo@web122302.mail.ne1.yahoo.com
Lists: pgsql-general
Sameer Thakur <samthakur74(at)gmail(dot)com> wrote:
> As part of a database evaluation, one key requirement is as follows:
>
> 1. There are multiple thick clients (say 20 ~ 100) with their local
> databases accepting updates
> 2. They sync data with a central database which can also receive updates itself.
> 3. They may not be connected to central database all the time.
> 4. The central database receives and merges client data, but does not
> push data back to clients i.e. data between clients is not synced via
> central database or any other way.
> Is there any way PostgreSQL, with any open source tool, can support
> such a scenario? How closely can it be met?
To avoid collisions, each of the thick clients, as well as the
central machine, must have an ID or a distinct range of key values
to assign. I don't know how many tables are involved, but it might
not be crazy to have the central machine partition the tables on
whatever distinguishes the sources, and use Slony to replicate from
each thick client to the individual partitions on the central
database.
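As a rough sketch of the two ideas above (non-overlapping key ranges per client, plus per-source partitions on the central server), something like the following could work. All table, column, and sequence names here are invented for illustration, and the range width per client is an assumption; note that in 2014-era PostgreSQL, partitioning means table inheritance with CHECK constraints rather than the declarative partitioning added later in version 10.

```sql
-- On thick client 7: a sequence confined to that client's key range,
-- so rows created offline can never collide with another source's keys.
CREATE SEQUENCE orders_id_seq
    MINVALUE 7000000
    MAXVALUE 7999999
    START    7000000
    NO CYCLE;  -- fail loudly rather than wrap into another client's range

CREATE TABLE orders (
    id         bigint      PRIMARY KEY DEFAULT nextval('orders_id_seq'),
    client_id  int         NOT NULL DEFAULT 7,
    created_at timestamptz NOT NULL DEFAULT now()
);

-- On the central server: a parent table plus one inherited child per
-- source; Slony would replicate each client's table into its child.
CREATE TABLE orders_central (
    id         bigint      PRIMARY KEY,
    client_id  int         NOT NULL,
    created_at timestamptz NOT NULL
);

CREATE TABLE orders_client_7 (
    CHECK (client_id = 7)
) INHERITS (orders_central);
```

Queries against `orders_central` on the central server would then see the merged data from all sources, while each replication stream writes only to its own child table.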
I'm sure there are dozens of plausible techniques that could be
proposed, and based on the minimal detail provided I can't be sure
that this is the best; it's just the first idea that came to mind.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company