From: Whit Armstrong <armstrong(dot)whit(at)gmail(dot)com>
To: bubba postgres <bubba(dot)postgres(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Time Series on Postgres (HOWTO?)
Date: 2011-01-15 00:49:09
Message-ID: AANLkTimyZLXHzvd7OUA=vewMf2wtCy9XUa54s-0L4q4m@mail.gmail.com
Lists: pgsql-general
I think you want to look at kdb, OneTick, and LIM (those are
commercial), or potentially MongoDB, where you could probably store a
compressed time series directly in the DB if you want.

If you're not going to store each observation as a row, then why use a
DB at all? Why not stick to flat files?

-Whit
On Fri, Jan 14, 2011 at 7:41 PM, bubba postgres
<bubba(dot)postgres(at)gmail(dot)com> wrote:
> I've been googling, but haven't found a good answer to what I should do if I
> want to store time series in Postgres.
> My current solution is to store serialized (compressed) blobs of data:
> for example, one day's worth of 1-minute samples (~1440 samples) stored
> as one row in a bytea, plus metadata.
> It would be nice if I could use one sample per column (because updating
> individual columns/samples is clear to me), but Postgres doesn't compress the
> row, which is bad given the highly repetitive data: easily 10X
> bigger.
>
> I've been considering a Double[] array, which would get compressed, but
> before I start down that path (I suppose I need to write some stored
> procedures to update individual samples), has anyone built anything
> like this? Any open source projects I should look at?
>
> Thanks.
>
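[Editor's note: a minimal sketch of the array-per-row layout described in the quoted message. Table and column names here are hypothetical, not from the thread. PostgreSQL arrays are 1-based, and individual elements can be updated in place with plain UPDATE, so no stored procedure is strictly required; large array values are stored via TOAST, which compresses them by default.]

```sql
-- Hypothetical schema: one row per series per day, one-minute samples as an array.
CREATE TABLE ts_minute (
    series_id  integer NOT NULL,
    day        date    NOT NULL,
    samples    double precision[],   -- ~1440 one-minute samples
    PRIMARY KEY (series_id, day)
);

-- Update a single sample in place (array subscripts are 1-based):
UPDATE ts_minute
   SET samples[37] = 42.5
 WHERE series_id = 1 AND day = '2011-01-14';

-- Read one sample back:
SELECT samples[37] FROM ts_minute
 WHERE series_id = 1 AND day = '2011-01-14';
```

Because a full day's array is a single large varlena value, it is TOASTed and compressed automatically, which is what makes this layout much smaller than one column (or one row) per sample for repetitive data.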