From: Sam Mason <sam(at)samason(dot)me(dot)uk>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Shall I use PostgreSQL Array Type in The Following Case
Date: 2010-01-05 14:24:58
Message-ID: 20100105142458.GY5407@samason.me.uk
Lists: pgsql-general
On Mon, Jan 04, 2010 at 05:12:56PM -0800, Yan Cheng Cheok wrote:
> Measurement table will have 24 * 50 million rows in 1 day
> Is it efficient to design that way?
>
> **I wish to have super fast write speed, and reasonable fast read speed from the database.**
When writing software there's (almost) always a trade-off between
development time and resulting performance. If you want the best
performance, I'd go for a table per "unit type", but this obviously
requires more implementation effort to maintain all these tables.
The data rates you're talking about mean that you're going to have to
put quite a bit of effort into performance; the simple EAV-style
solution you suggested isn't going to scale well. I'd guess you're
talking about a minimum of 70GB of data per day for your initial
suggestion (24 * 50 million is 1.2 billion rows per day, at roughly 60
bytes of per-row overhead and data each), whereas a table per unit type
would take it down to about 10% of that.
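To make the comparison concrete, here's a rough sketch of the two layouts. All table and column names are made up for illustration; your actual measured parameters will differ:

```sql
-- EAV style: one row per measured value, so per-row overhead
-- (tuple header, indexes) is paid once per value.
CREATE TABLE measurement (
    unit_id integer NOT NULL,
    name    text    NOT NULL,  -- e.g. 'voltage', 'current'
    value   numeric NOT NULL
);

-- Table per unit type: one row per unit, one column per value,
-- so the per-row overhead is paid once per unit instead.
CREATE TABLE measurement_unit_a (
    unit_id integer NOT NULL,
    voltage numeric NOT NULL,
    current numeric NOT NULL
);
```

With 24 values per unit, the second layout stores the same data in 1/24th the rows, which is where most of the space and write-throughput saving comes from.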
> Or shall I make use of PostgreSQL Array facility?
That may help a bit, but read performance is going to be pretty bad.
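For reference, the array variant would look something like this (again, names are illustrative); it cuts row count the same way, but pulling one parameter back out means indexing into the array rather than a plain column scan:

```sql
-- One row per unit, all measured values packed into an array.
CREATE TABLE measurement_arr (
    unit_id integer   NOT NULL,
    vals    numeric[] NOT NULL  -- one element per measured parameter
);

-- Reading a single parameter back means subscripting the array
-- (PostgreSQL arrays are 1-based):
SELECT vals[1] FROM measurement_arr WHERE unit_id = 42;
```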
--
Sam http://samason.me.uk/