From: "Rodrigo Sakai" <rodrigo(dot)sakai(at)poli(dot)usp(dot)br>
To: "Brad Nicholson" <bnichols(at)ca(dot)afilias(dot)info>, "Simon Riggs" <simon(at)2ndquadrant(dot)com>
Cc: "Michael Glaesemann" <grzm(at)myrealbox(dot)com>, <pgsql-general(at)postgresql(dot)org>
Subject: Re: Temporal Databases
Date: 2006-03-08 15:56:38
Message-ID: 003001c642c8$e2fd47a0$1401a8c0@netsis01xp
Lists: pgsql-general
OK, but actually I'm not concerned about logging old values. I'm concerned
about checking temporal constraints: entity integrity (PK) and referential
integrity (FK).
For example, take the salary table:

Salary (employee_id, salary, start_date, end_date)

where [start_date, end_date] is an interval, meaning that the salary is (or
was) valid during that period of time.
I have to prevent occurrences like this:

employee_id | salary | start_date | end_date
001         | 1000   | 2005-01-20 | 2005-12-20
001         | 2000   | 2005-06-20 | 2006-04-20
So intervals need to be compared, not only atomic values. If you want to
know what the salary was on 2005-07-25, there is no single answer: the two
rows overlap, so the data is inconsistent!
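The overlap between the two rows above can be detected with the standard SQL OVERLAPS predicate. A sketch against the Salary table defined earlier (the table and column names are taken from that definition, nothing else is assumed):

```sql
-- Find pairs of rows for the same employee whose validity intervals overlap.
-- Joining on a.start_date < b.start_date avoids pairing a row with itself
-- and reporting each pair twice (rows with identical start_date would need
-- a different tie-breaker).
SELECT a.employee_id,
       a.start_date AS a_start, a.end_date AS a_end,
       b.start_date AS b_start, b.end_date AS b_end
FROM   salary a
JOIN   salary b
  ON   a.employee_id = b.employee_id
 AND   a.start_date  < b.start_date
WHERE  (a.start_date, a.end_date) OVERLAPS (b.start_date, b.end_date);
```

Run against the two rows above, this reports the one overlapping pair.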
Of course I can develop functions and triggers to accomplish this, but the
idea is to keep it simple for the developers: as simple as declaring a
primary key!
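For reference, the trigger route could look like the following sketch (all names here are hypothetical, and it ignores concurrent transactions, which would need SERIALIZABLE isolation or explicit locking to be fully safe):

```sql
-- Reject an INSERT whose interval overlaps an existing row for the
-- same employee. On UPDATE the EXISTS test would also match the row
-- being updated, so this sketch covers INSERT only.
CREATE OR REPLACE FUNCTION salary_check_overlap() RETURNS trigger AS $$
BEGIN
    IF EXISTS (
        SELECT 1
        FROM   salary s
        WHERE  s.employee_id = NEW.employee_id
          AND  (s.start_date, s.end_date) OVERLAPS (NEW.start_date, NEW.end_date)
    ) THEN
        RAISE EXCEPTION 'overlapping salary interval for employee %',
                        NEW.employee_id;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER salary_check_overlap_trg
    BEFORE INSERT ON salary
    FOR EACH ROW EXECUTE PROCEDURE salary_check_overlap();
```

Later PostgreSQL releases can express this declaratively with an exclusion constraint (EXCLUDE) over a range type, which is much closer to the "as simple as a primary key" goal.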
Thanks for your attention!!
----- Original Message -----
From: "Brad Nicholson" <bnichols(at)ca(dot)afilias(dot)info>
To: "Simon Riggs" <simon(at)2ndquadrant(dot)com>
Cc: "Rodrigo Sakai" <rodrigo(dot)sakai(at)poli(dot)usp(dot)br>; "Michael Glaesemann"
<grzm(at)myrealbox(dot)com>; <pgsql-general(at)postgresql(dot)org>
Sent: Friday, February 24, 2006 1:56 PM
Subject: Re: [GENERAL] Temporal Databases
> Simon Riggs wrote:
>
>>A much easier way is to start a serialized transaction every 10 minutes
>>and leave the transaction idle-in-transaction. If you decide you really
>>need to you can start requesting data through that transaction, since it
>>can "see back in time" and you already know what the snapshot time is
>>(if you record it). As time moves on you abort and start new
>>transactions... but be careful that this can affect performance in other
>>ways.
>>
>>
>
> We're currently prototyping a system (still very much in its infancy)
> that uses the Slony-I log shipping mechanism to build an offline temporal
> system for point-in-time reporting purposes. The idea is that the log
> shipping files will contain only the committed inserts, updates and
> deletes. Those log files are then applied to an offline system which has
> a trigger defined on each table that rewrites the statements, based on
> the type of statement, into a temporally sensitive format.
>
> If you want to get an exact point-in-time snapshot with this approach, you
> are going to have to have timestamps on all tables in your source database
> that record the exact time of each statement. Otherwise, a best guess
> (based on the time the Slony sync was generated) is the closest that
> you will be able to come.
>
> --
> Brad Nicholson 416-673-4106 Database Administrator, Afilias Canada
> Corp.