From: Ramon Bastiaans <bastiaans(at)sara(dot)nl>
To: pgsql-performance(at)postgresql(dot)org
Subject: multi billion row tables: possible or insane?
Date: 2005-03-01 09:34:29
Message-ID: 42243725.6060804@sara.nl
Lists: pgsql-performance
Hi all,
I am doing research for a project of mine where I need to store several
billion values for a monitoring and historical tracking system for a big
computer system. My current estimate is that I will have to store (somehow)
around 1 billion values each month (possibly more).
I was wondering if anyone has had any experience with this kind of data
volume in a PostgreSQL database, and how it affects database design and
optimization.
What are the important issues when setting up a database this big, and is
it doable at all? Or would it be insane to think about storing up to
5-10 billion rows in a PostgreSQL database?
The database's performance is important. There would be no use in
storing the data if queries take ages to run. Queries should be quite
fast if possible.
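
For illustration, a minimal sketch of one possible layout, assuming a
timestamped value table partitioned by month using inheritance (the table
and column names are only placeholders):

-- parent table: never holds rows itself, just defines the layout
CREATE TABLE measurements (
    node_id     integer                  NOT NULL,
    metric_id   integer                  NOT NULL,
    sample_time timestamp with time zone NOT NULL,
    value       double precision
);

-- one child table per month, so old months can be dropped or
-- archived as a unit instead of deleting billions of rows
CREATE TABLE measurements_2005_03 (
    CHECK (sample_time >= '2005-03-01' AND sample_time < '2005-04-01')
) INHERITS (measurements);

-- index each child on the columns queries filter by
CREATE INDEX measurements_2005_03_time_idx
    ON measurements_2005_03 (sample_time, metric_id);

Queries that filter on sample_time could then be aimed at the relevant
month's table directly, keeping index sizes per table manageable.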
I would really like to hear people's thoughts/suggestions or "go see a
shrink, you must be mad" statements ;)
Kind regards,
Ramon Bastiaans