| From: | Brent Wood <b(dot)wood(at)niwa(dot)co(dot)nz> |
|---|---|
| To: | Poul Jensen <flyvholm(at)gfy(dot)ku(dot)dk> |
| Cc: | "John D(dot) Burger" <john(at)mitre(dot)org>, pgsql-general General <pgsql-general(at)postgresql(dot)org> |
| Subject: | Re: SQL - planet redundant data |
| Date: | 2005-09-12 21:54:18 |
| Message-ID: | 20050913095224.U98953@storm-user.niwa.co.nz |
| Lists: | pgsql-general |
>
> That is exactly what I want, and now I finally see how to do it (I
> think!). However, it is a considerable amount of work to set this up
> manually, plus, it has been a headache realizing how to get there at
> all. I'm hoping that one or more of the developers think it would be a
> good idea for PostgreSQL to perform an internal table optimization
> process using run-length encoding. Imagine you could just throw all your
> data into one table, run OPTIMIZE TABLE and you'd be done. With SQL
> being all about tables I'm surprised this idea (or something even
> better) hasn't been implemented already.
There was a recent brief thread here on storing time-series data, where the
use of clustered indexes (PostgreSQL's CLUSTER command) on static tables was
suggested. That might also be useful in your situation...
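As a rough sketch of that approach (table, column, and index names here are
purely illustrative, not from the earlier thread):

```sql
-- Hypothetical time-series table; names are illustrative only.
CREATE TABLE readings (
    station_id  integer,
    taken_at    timestamptz,
    value       double precision
);

-- Index on the columns you typically range-scan over.
CREATE INDEX readings_station_time_idx
    ON readings (station_id, taken_at);

-- Physically reorder the table rows to match the index order,
-- so range scans touch far fewer pages.
CLUSTER readings_station_time_idx ON readings;

ANALYZE readings;
```

Note that CLUSTER is a one-time reordering, not maintained on insert, which
is why it suits static (or bulk-loaded, rarely updated) tables -- re-run it
after a large load if the physical ordering matters to you.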
Cheers,
Brent Wood