From: | "William Temperley" <willtemperley(at)gmail(dot)com> |
---|---|
To: | pgsql-general(at)postgresql(dot)org(dot) |
Subject: | Design decision advice |
Date: | 2008-08-13 14:41:06 |
Message-ID: | 439dc11e0808130741s3fcd70f6yb1943d8034b0b9b@mail.gmail.com |
Lists: pgsql-general
Dear all,
I'd really appreciate a little advice here - I'm designing a PG
database to manage a scientific dataset.
I've these fairly clear requirements:
1. Multiple users of varying skill will input data.
2. Newly inserted data will be audited and marked good or bad.
3. We must have a dataset that is frozen or "known good" to feed into
various models.
This, as far as I can see, leaves me with three options:
A. Two databases, one for transaction processing and one for
modelling. At arbitrary intervals (days/weeks/months) all "good" data
will be moved to the modelling database.
B. One database, where all records will either be marked "in" or
"out". The application layer has to exclude all data that is out.
C. Sandbox tables for all tables updated by the application.
I prefer option A, as it gives me the flexibility to run heavy
modelling queries on a separate server, but I'm not sure how best to
deal with the replication issues when moving data to the modelling db.
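For what it's worth, the sort of transfer I have in mind is something
like this, assuming a single "observations" table with a "status"
flag (both names are just placeholders for illustration):

    -- Sketch only: export the audited rows for loading into the modelling db.
    COPY (SELECT * FROM observations WHERE status = 'good')
        TO '/tmp/observations_good.csv' WITH CSV HEADER;

    -- ...and on the modelling server:
    COPY observations FROM '/tmp/observations_good.csv' WITH CSV HEADER;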
Option B makes me think of hard-to-diagnose bugs, with queries
looking at different datasets, for example.
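If I did go with B, I imagine I'd hide the flag behind a view so the
application and the models can't forget the filter; a rough sketch,
again with made-up names:

    -- Every record carries a status flag; 'pending' until audited.
    ALTER TABLE observations ADD COLUMN status text NOT NULL DEFAULT 'pending';

    -- Models only ever query the view, so "out" rows can't leak in.
    CREATE VIEW good_observations AS
        SELECT * FROM observations WHERE status = 'good';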
With option C, if both tables tX and tY have sandbox tables sX and sY,
there could be problems where sX needs to reference data in sY, but
has a foreign key referencing tY.
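To make that concrete, the situation would be roughly this (columns
simplified, names as above):

    CREATE TABLE tY (id integer PRIMARY KEY);
    CREATE TABLE sY (id integer PRIMARY KEY);   -- sandbox copy of tY
    CREATE TABLE sX (
        id   integer PRIMARY KEY,
        y_id integer REFERENCES tY (id)         -- but the row it needs may still sit in sY
    );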
What would you guys do? Have I missed a better option here?
Thanks
Will T