From: Anton Nikiforov <anton(at)nikiforov(dot)ru>
To: pgsql-general(at)postgresql(dot)org
Subject: Huge number of raws
Date: 2004-03-18 08:38:33
Message-ID: 40596009.8030002@nikiforov.ru
Lists: pgsql-general
Dear All!
I have a question about how PostgreSQL will manage a huge number of
rows.
I have a project where 10 million records will be added to the database
every half hour, and they need to be calculated, summarized, and
managed.
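To give an idea of the load, here is a minimal sketch of the loading
and summarizing step I have in mind (the table and column names are
invented for illustration):

    -- bulk-load one half-hour batch into a staging table
    COPY traffic_staging FROM '/data/incoming/batch.tsv';

    -- roll the raw records up into report-ready summary rows
    INSERT INTO traffic_summary (period, records, total_bytes)
    SELECT date_trunc('hour', collected), count(*), sum(bytes)
    FROM traffic_staging
    GROUP BY date_trunc('hour', collected);

    -- clear the staging table for the next batch
    TRUNCATE traffic_staging;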
I'm planning to have a few servers, each receiving something like a
million records, which will then store this data on the central server
in a report-ready format.
I know that a million records can be managed by Postgres (I have a
database with 25 million records and it is working just fine), but I'm
worried about the central database mentioned above, which would have to
store 240 million records daily and collect this data for years.
I cannot even imagine the hardware needed to collect monthly statistics.
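I assume some kind of date-based partitioning would be needed to keep
this manageable. A rough sketch of what I have in mind, using table
inheritance with one child table per day (all names are invented):

    CREATE TABLE traffic (
        collected  timestamp NOT NULL,
        src_ip     inet,
        bytes      bigint
    );

    -- the CHECK constraint documents which dates this child may hold
    CREATE TABLE traffic_2004_03_18 (
        CHECK (collected >= '2004-03-18' AND collected < '2004-03-19')
    ) INHERITS (traffic);

    -- a query against the parent automatically sees all children
    SELECT count(*) FROM traffic
    WHERE collected >= '2004-03-18' AND collected < '2004-03-19';

With such a layout an old day could be dropped or archived as a whole
table instead of deleting hundreds of millions of individual rows.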
So my question is: is this a task for Postgres, or should I think
about Oracle or DB2?
I'm also thinking about replicating data between two servers for
redundancy; what could you suggest for this?
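As an illustration of the kind of per-table replication I am after:
much later PostgreSQL releases (10 and up) provide built-in logical
replication, so the following is only a sketch under that assumption,
with invented host and table names.

    -- on the primary: publish the summary table
    CREATE PUBLICATION summary_pub FOR TABLE traffic_summary;

    -- on the standby: subscribe to it
    CREATE SUBSCRIPTION summary_sub
        CONNECTION 'host=standby.example.com dbname=stats user=repl'
        PUBLICATION summary_pub;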
And the data migration problem is still an open issue for me: how can I
migrate data from fast devices (a RAID array) to slower devices (an MO
library or something similar) while still having access to it?
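One mechanism that would fit this is tablespaces (available from
PostgreSQL 8.0 onward): an old partition can be moved to a directory on
the slow device while remaining queryable through the parent table. A
minimal sketch, with invented paths and names:

    -- create a tablespace on the slow device
    CREATE TABLESPACE archive_space LOCATION '/mnt/mo_library/pgdata';

    -- move an old daily partition onto it
    ALTER TABLE traffic_2004_03_18 SET TABLESPACE archive_space;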
--
Best regards,
Anton Nikiforov
| | From | Date | Subject |
|---|---|---|---|
| Next Message | Anton Nikiforov | 2004-03-18 08:59:41 | Re: Huge number of raws |
| Previous Message | Denis Gasparin | 2004-03-18 07:52:24 | Re: Smallint - Integer Casting Problems in Plpgsql functions |