From: Jorge Godoy <jgodoy(at)gmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Design Question (Time Series Data)
Date: 2007-10-04 09:34:01
Message-ID: 200710040634.02185.jgodoy@gmail.com
Lists: pgsql-general
On Thursday 04 October 2007 06:20:19 Pavel Stehule wrote:
>
> I have had good experience with variant 2. PostgreSQL needs 24 bytes
> for the header of every row, so it isn't very effective to store one
> field per row. You can easily transform between an array and a table
> now.
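For reference, the array-to-table transformation mentioned above can be sketched as follows (table and column names are hypothetical, and unnest()/array_agg() assume a sufficiently recent PostgreSQL):

```sql
-- Array -> rows: expand one array column into one value per row.
SELECT series_id, v
  FROM measurement_arrays, unnest(values_arr) AS v;

-- Rows -> array: collapse a normalized table back into arrays.
SELECT series_id, array_agg(value)
  FROM measurement
 GROUP BY series_id;
```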
But then you'll make all SQL operations complex, you'll have problems using aggregates, etc. For example, with a normalized design one can query the average value of a specific series with simple commands, and given the use of indexes this can be highly optimized. With an array, he'll be doing a seqscan on every row, because he needs to find whether there is a value for the given series, then extract those values, and finally calculate the average (I know you can select an element of the array, but it won't be easy on the planner, and the loop calculating the average will have to do all of that on every row).
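To illustrate the contrast, here are the two shapes of that average query (table and column names are mine, not from the thread; the array variant assumes unnest() is available):

```sql
-- Normalized: one value per row. With an index on series_id the
-- planner can use an index scan plus a simple aggregate.
SELECT avg(value)
  FROM measurement
 WHERE series_id = 42;

-- Array variant: one array per row. Every row must be read and the
-- array unnested before the aggregate can run.
SELECT avg(v)
  FROM measurement_arrays, unnest(values_arr) AS v
 WHERE series_id = 42;
```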
I'd use the same solution he was going to: a normalized table with a timestamp (with TZ, because of daylight saving time...), a column with a FK to a series table, and the value itself. Index the first two columns (if you're searching using the value as a parameter, then index it as well), and this would be the basis of my design for this specific case.
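A minimal sketch of that schema (all names are hypothetical, chosen for illustration):

```sql
CREATE TABLE series (
    id    serial PRIMARY KEY,
    name  text NOT NULL
);

CREATE TABLE measurement (
    measured_at  timestamptz NOT NULL,   -- with TZ, as discussed above
    series_id    integer NOT NULL REFERENCES series (id),
    value        numeric NOT NULL,
    PRIMARY KEY (measured_at, series_id)
);

-- Index on the first two columns; supports "all values of series X
-- over time range Y" lookups.
CREATE INDEX measurement_series_time_idx
    ON measurement (series_id, measured_at);
```

Whether the composite primary key or a separate surrogate key is preferable depends on the workload; the point is that each (timestamp, series, value) triple is one row.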
Having good statistics and tuning autovacuum will also help a lot in handling new inserts and deletes.
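For instance, a postgresql.conf fragment along these lines (values are illustrative starting points, not tuned recommendations):

```
autovacuum = on                   # reclaim dead rows left by deletes/updates
autovacuum_naptime = 60           # seconds between autovacuum launcher runs
default_statistics_target = 100   # larger samples give the planner better estimates
```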
--
Jorge Godoy <jgodoy(at)gmail(dot)com>
| | From | Date | Subject |
|---|---|---|---|
| Next Message | Erwin Moller | 2007-10-04 09:52:36 | Postgres8: subselect and optimizer/planner |
| Previous Message | Pavel Stehule | 2007-10-04 09:20:19 | Re: Design Question (Time Series Data) |