Need software infrastructure advice

From: "Aaron Zeitler" <a(dot)zeitler(at)gmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Need software infrastructure advice
Date: 2007-05-23 18:51:28
Message-ID: 922f36a70705231151neebcd11l22d88b3cad528506@mail.gmail.com
Lists: pgsql-general

Note: The meat and potatoes of my request is in the "What I would like"
section. Everything else is just explanation.

Currently my company is using software that really isn't meeting our needs.
We have tried to get the third party to let us help with their source, but
they won't allow it (even after we offered to sign non-disclosure and
non-competition agreements). Because of a number of issues, we are looking
at developing our own solution. I have been thinking for some time about
using PostgreSQL as part of that solution. I'm just looking for some advice
on how I might pull off the features we would like.

What I would like:
-Data sharing between locations.
-Offline availability (in case Internet connection goes down).
-The ability to make a change at the home office and have that change
applied at the applicable (not necessarily all) remote locations. Currently
we have to log in (via VNC) to each location to make a change; I would like
to remove this requirement.
-Company-wide querying built in.

What I have now:
-60 remote locations (in different states) with ADSL or Cable Internet
connection (connection reliability is just like it is for a home user,
sometimes great, sometimes "flakey").
-Each remote location has at least 2 computers, most have 3, a few have 4,
and one has 5. The naming structure is station-##-1, station-##-2, etc.
where the ## is the store number. Computers have Windows 2000 or Windows XP
Pro.
-The current application we have is an MS Access front-end with an MS Access
back-end shared peer-to-peer. The back-end is located on station 1 of each
location, and the other stations at the location link to it. Each store's
data is separate; stores do not share data between locations (though they
may share actual clients).
-Backups consist of a batch file that zips the back-end and transfers the
file to our home office via sftp.
-We run MS SQL Server 2000 at the home office. I have connected the MS
Access back-ends as linked servers and run Stored Procedures to combine the
data from the various back-ends into single tables in MS SQL Server. I then
can run company-wide reports against this combined data (something not
normally possible or provided by the software as they are essentially
separate databases).
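
The combining step above can be sketched in SQL roughly as follows. The
table and column names are invented for illustration and will not match the
actual schema; the per-store sources stand in for the linked Access
back-ends:

```sql
-- Hypothetical sketch of the combine step on the MS SQL Server side:
-- each store's linked back-end contributes its rows, tagged with a
-- store number, to one company-wide reporting table.
TRUNCATE TABLE combined_customers;

INSERT INTO combined_customers (store_id, customer_id, name, balance)
SELECT 1, customer_id, name, balance FROM store01_customers
UNION ALL
SELECT 2, customer_id, name, balance FROM store02_customers;
-- ...one branch per store, or generated dynamically for all 60.
```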

What I have considered:
-At the most basic, I could keep the same setup we have now, MS Access
front-end and back-end, but with OUR application running it. But this
doesn't meet the other "wants" I would like to fulfill.
-Central database at the home office. This would allow us to share data
between stores (which is desirable). The front-end could simply be a web
application. Or it could be a Java app that connects over the Internet. We
used software like this before our current solution years ago (but it wasn't
OUR software). The problem with this was that if a store lost its Internet
connection, it could not help customers. That was not acceptable.
-Same setup as now (using the Access databases), but providing a web page
linked to the MS SQL Server at the home office. This would allow separate
stores to look up (and thus share) information between each other. The
problems with this are that the data would only be updated daily (I would
like it to be at least hourly) and that the data is not integrated into the
software. Integration into the software would be best in my opinion (no
need to log onto a site for each check, and no way to forget to check
against other stores' data).
-PostgreSQL as a back-end. Each store would have its own PostgreSQL
back-end that would then be replicated to the home office into its own
database, schema, or set of tables. At the home office I would combine the
data into "unioned" versions of what exists locally at each location. I
could then replicate these "unioned" tables back to the remote locations.
So a store would be the master of its own data and a slave to the "unioned"
tables, while the home office would be a slave to the store tables but a
master for the "unioned" tables. The software would store its customers'
data in its regular tables, but would perform checks against the "unioned"
tables to see whether a customer currently has data somewhere else. I feel
this would keep data available, so customers could still be helped even if
the Internet went down for a few minutes (or days). The main problem with
this solution is that the Slony-I documentation says you really shouldn't
replicate to more than about a dozen nodes, and I have 60. I would also
like to be able to go well beyond 60 (so I am looking for a scalable
solution). Also, sometimes the Internet connection to a location goes down;
some are more "flakey" than others. Slony-I says it is not designed for
this type of situation.

What I need:
-Suggestions. I am open to just about anything. This is a long-term
project. We certainly don't have unlimited resources, but we've paid around
$300k (most of the cost being custom programming) over the last 4 years for
a solution that doesn't really work for us. I'm sure that we could spend
some decent money on something that we will own forever. I could use Java,
AJAX, C++, plain HTML, or whatever as the front-end (I may need to learn it
or hire help for it). I'm open to any kind of back-end, but I was looking
to open source (specifically PostgreSQL) to save on licensing costs.

Any advice is appreciated.
