From: | "Jim C(dot) Nasby" <decibel(at)decibel(dot)org> |
---|---|
To: | Ramon Bastiaans <bastiaans(at)sara(dot)nl> |
Cc: | pgsql-performance(at)postgresql(dot)org |
Subject: | Re: multi billion row tables: possible or insane? |
Date: | 2005-03-04 22:05:07 |
Message-ID: | 20050304220507.GH2209@decibel.org |
Lists: pgsql-performance
On Tue, Mar 01, 2005 at 10:34:29AM +0100, Ramon Bastiaans wrote:
> Hi all,
>
> I am doing research for a project of mine where I need to store several
> billion values for a monitoring and historical tracking system for a big
> computer system. My current estimate is that I have to store (somehow)
> around 1 billion values each month (possibly more).
On a side-note, do you need to keep the actual row-level details for
history? http://rrs.decibel.org might be of some use.
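As a rough illustration of that summarization idea (not how RRS itself is implemented),
here is a minimal sketch, assuming a hypothetical raw_samples(metric_id, sample_time, value)
table: roll detail rows older than a week up into hourly aggregates, then drop the detail.

-- Hypothetical sketch only; table and column names are made up for illustration.
-- Keep one aggregate row per metric per hour instead of every raw sample.
CREATE TABLE hourly_summary (
    metric_id  integer          NOT NULL,
    bucket     timestamptz      NOT NULL,
    n_samples  bigint           NOT NULL,
    min_value  double precision,
    max_value  double precision,
    avg_value  double precision,
    PRIMARY KEY (metric_id, bucket)
);

-- One-shot rollup of detail older than 7 days into hourly buckets.
INSERT INTO hourly_summary
SELECT metric_id,
       date_trunc('hour', sample_time) AS bucket,
       count(*), min(value), max(value), avg(value)
FROM   raw_samples
WHERE  sample_time < now() - interval '7 days'
GROUP  BY metric_id, date_trunc('hour', sample_time);

-- Then the raw detail can go away.
DELETE FROM raw_samples
WHERE  sample_time < now() - interval '7 days';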
Other than that, what others have said: lots and lots of disks in
RAID10, and Opterons (though I would choose Opterons not for memory size
but for memory *bandwidth*).
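For a rough, back-of-the-envelope sense of scale from the 1 billion/month figure:
1,000,000,000 / (30 * 86,400 s) is about 385 inserts per second sustained, before
any peaks or index maintenance, which is why spindle count and memory bandwidth
end up dominating the sizing.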
--
Jim C. Nasby, Database Consultant decibel(at)decibel(dot)org
Give your computer some brain candy! www.distributed.net Team #1828
Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"