From: Stefan Keller <sfkeller(at)gmail(dot)com>
To: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: PG Schema to be used as log and monitoring store
Date: 2017-12-09 19:22:02
Message-ID: CAFcOn2-haFL3N6KFzK-zjJHTWnNSE05+iTWy1ahJFFzWbwn4Gg@mail.gmail.com
Lists: pgsql-general
Hi,
Given this kind of sensor (Internet-of-Things) logging and monitoring scenario:
* There are 3 production machines, monitored every few seconds for the
coming (~2) years.
* Machine m1 emits 20 boolean and 20 float4 values, captured by sensors
(m1s1..m1s40).
* Machine m2 has the same attributes as m1 plus 10+10 more (m2s1..m2s20).
* Machine m3 is like m2, but half of its attributes are different.
* Queries happen about once a day, like:
SELECT m1s1, m1s2 FROM m1 WHERE logged BETWEEN '2017-11-01' AND '2017-11-30';
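A query like that implies one column per sensor. For comparison, a minimal sketch of that wide layout (table name and column types are my assumptions, not from the post):

```sql
-- Hypothetical wide layout: one column per sensor, as the sample
-- query above (SELECT m1s1, m1s2 ...) assumes.
create table m1_wide (
  id bigint,
  logged timestamp,
  m1s1 boolean,
  m1s2 boolean,
  -- ... m1s3 .. m1s20 boolean ...
  m1s21 float4
  -- ... m1s22 .. m1s40 float4 ...
);
```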
So this is a kind of "immutable DB" where there are
* a rather static schema, with sources that have overlapping attributes,
* heavy writes,
* periodic reads.
Would you also model this schema like my proposal below, which saves
space but makes inserts/updates a little more complex due to the
arrays?
create table m1 (
  id bigint,
  created timestamp,
  b20 bit(20) default b'00000000000000000000',  -- 20 boolean sensors packed into a bit string
  farr20 float8[20]  -- 20 float sensors (PostgreSQL does not enforce the declared array length)
);
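For illustration, a sketch of how an insert and the daily read could look against this array layout. The sensor-to-position mapping is my assumption; note the sample query above uses a column named "logged" while the DDL names it "created" — the sketch follows the DDL:

```sql
-- Sketch only: assumes m1s1..m1s20 map to bit positions 0..19
-- and m1s21..m1s40 map to array indexes 1..20.
insert into m1 (id, created, b20, farr20)
values (1, now(),
        b'10100000000000000000',       -- 20 packed booleans
        array[20.5, 19.8]::float8[]);  -- float sensors (truncated for brevity)

-- Daily read: unpack individual sensors from the packed columns.
select get_bit(b20, 0) as m1s1,  -- bit strings are indexed from 0
       farr20[1]       as m1s21  -- arrays are indexed from 1 by default
from m1
where created between '2017-11-01' and '2017-11-30';
```

The trade-off this illustrates: each row stays compact, but every query must know the positional mapping, and updating a single sensor value means rewriting the packed column.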
:Stefan