From: Markus Schaber <schabios(at)logi-track(dot)com>
To: Ramon Bastiaans <bastiaans(at)sara(dot)nl>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: multi billion row tables: possible or insane?
Date: 2005-03-01 14:01:50
Message-ID: 422475CE.9030507@logi-track.com
Lists: pgsql-performance
Hi, Ramon,
Ramon Bastiaans wrote:
> The database's performance is important. There would be no use in
> storing the data if a query will take ages. Queries should be quite fast
> if possible.
Which kind of query do you want to run?
Queries that involve only a few rows should stay quite fast when you set
up the right indices.
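As a minimal, runnable sketch of that point (using Python's sqlite3 as a stand-in for PostgreSQL; the table and column names are invented for illustration, since the thread never shows a schema), a composite index lets an equality lookup avoid scanning the whole table:

```python
import sqlite3

# Hypothetical schema: sensor measurements keyed by sensor and timestamp.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurements (sensor_id INTEGER, ts TEXT, value REAL)")
conn.executemany(
    "INSERT INTO measurements VALUES (?, ?, ?)",
    [(i % 100, f"2005-03-01T00:00:{i % 60:02d}", float(i)) for i in range(10_000)],
)

# An index on (sensor_id, ts) lets point and range queries touch only
# the few rows they need instead of scanning every row.
conn.execute("CREATE INDEX idx_sensor_ts ON measurements (sensor_id, ts)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT value FROM measurements WHERE sensor_id = 42"
).fetchall()
print(plan)  # the plan should mention an index search via idx_sensor_ts
```

The same idea applies in PostgreSQL: as long as the WHERE clause matches a leading prefix of the index columns, the planner can use an index scan rather than a sequential scan.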
However, queries that involve sequential scans over your table (like
average computation) will take ages; you would need very fast I/O for
those. Or, better, use a multidimensional data warehouse engine. Those
can precalculate the needed aggregate functions and reports, but they
need loads of storage (because of very redundant data storage), and I
don't know of any open source or cheap software of that kind.
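The precalculation idea can also be approximated by hand with a summary table that is maintained on every insert, so reports read a handful of precomputed rows instead of scanning billions. A hedged sketch, again using sqlite3 for runnability (in PostgreSQL one would typically maintain such a table with triggers; all names here are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE measurements (sensor_id INTEGER, value REAL);
-- Summary table of precomputed aggregates per sensor, so computing an
-- average never requires a sequential scan of the raw data.
CREATE TABLE sensor_totals (sensor_id INTEGER PRIMARY KEY, n INTEGER, total REAL);
""")

def insert(sensor_id: int, value: float) -> None:
    """Insert a raw row and keep the aggregate row in sync."""
    conn.execute("INSERT INTO measurements VALUES (?, ?)", (sensor_id, value))
    conn.execute(
        """INSERT INTO sensor_totals VALUES (?, 1, ?)
           ON CONFLICT(sensor_id) DO UPDATE
           SET n = n + 1, total = total + excluded.total""",
        (sensor_id, value),
    )

for v in [1.0, 2.0, 3.0]:
    insert(7, v)

# The average comes from one tiny summary row, not a scan of measurements.
n, total = conn.execute(
    "SELECT n, total FROM sensor_totals WHERE sensor_id = 7"
).fetchone()
print(total / n)  # 2.0
```

The trade-off is exactly the one described above: writes become more expensive and storage is redundant, in exchange for aggregate reports that no longer depend on I/O speed over the full table.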
Markus
--
markus schaber | dipl. informatiker
logi-track ag | rennweg 14-16 | ch 8001 zürich
phone +41-43-888 62 52 | fax +41-43-888 62 53
mailto:schabios(at)logi-track(dot)com | www.logi-track.com