From: Anton Nikiforov <anton(at)nikiforov(dot)ru>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Huge number of rows
Date: 2004-03-18 08:59:41
Message-ID: 405964FD.1080506@nikiforov.ru
Lists: pgsql-general
Anton Nikiforov writes:
> Dear All!
> I have a question about how PostgreSQL will manage a huge number
> of rows.
> I have a project where, every half hour, 10 million records will be
> added to the database, and they should be calculated, summarized and
> managed.
> I'm planning to have a few servers that will each receive something
> like a million records and then store this data on the central
> server in report-ready format.
> I know that a million records can be managed by Postgres (I have a
> database with 25 million records and it is working just fine),
> but I'm worried about the central database mentioned above, which
> should store 240 million records daily and collect this data for years.
> I cannot even imagine the hardware needed to collect monthly
> statistics. And my question is: is this a task for Postgres, or
> should I think about Oracle or DB2?
> I'm also thinking about replication of data between two servers for
> redundancy; what could you suggest for this?
> And the data migration problem is still an open issue for me: how
> to migrate data from fast devices (a RAID array) to slower devices
> (an MO library or something like that) while still having access to
> this data?
>
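The ingest-and-summarize pipeline described above (raw records collected per server, then stored centrally "in report-ready format") could be sketched as a rollup table. All table and column names here are hypothetical, invented for illustration; the original message does not specify a schema:

```sql
-- Hypothetical raw table, as it might look on a collector server
CREATE TABLE traffic_raw (
    src_ip   inet        NOT NULL,
    recorded timestamptz NOT NULL,
    bytes    bigint      NOT NULL
);

-- Report-ready rollup: one row per half hour per /24 network,
-- instead of millions of raw records on the central server
CREATE TABLE traffic_summary (
    period_start timestamptz NOT NULL,
    src_net      cidr        NOT NULL,
    total_bytes  bigint      NOT NULL,
    record_count bigint      NOT NULL,
    PRIMARY KEY (period_start, src_net)
);

INSERT INTO traffic_summary
SELECT date_trunc('hour', recorded)
         + interval '30 minutes'
           * (extract(minute FROM recorded)::int / 30),  -- snap to the half hour
       network(set_masklen(src_ip, 24)),                 -- aggregate per /24
       sum(bytes),
       count(*)
FROM traffic_raw
GROUP BY 1, 2;
```

With this shape, the central server only ever sees the pre-aggregated rows, and the raw data can age out to slower storage on its own schedule.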
And one more question: does Postgres have something like table
partitioning in Oracle, to store data according to some rule, such as a
group of data sources (an IP network or something similar)?
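For context: at the time of this thread PostgreSQL had no declarative partitioning (that arrived in version 10 as `PARTITION BY`), but table inheritance plus CHECK constraints could approximate Oracle-style partitioning by data source. A minimal sketch, with hypothetical table and column names:

```sql
-- Parent table; children inherit its columns
CREATE TABLE traffic (
    src_ip   inet        NOT NULL,
    recorded timestamptz NOT NULL,
    bytes    bigint      NOT NULL
);

-- One child per source network; the CHECK constraint documents which
-- rows belong here (and, from 8.1 on, lets the planner skip
-- irrelevant children via constraint exclusion)
CREATE TABLE traffic_net10 (
    CHECK (src_ip << inet '10.0.0.0/8')
) INHERITS (traffic);

CREATE TABLE traffic_net172 (
    CHECK (src_ip << inet '172.16.0.0/12')
) INHERITS (traffic);

-- A query against the parent transparently scans all children
SELECT sum(bytes) FROM traffic WHERE src_ip << inet '10.0.0.0/8';
```

Note that with inheritance, inserts must be routed to the correct child by the application or by a rule/trigger on the parent; declarative partitioning in PostgreSQL 10+ does this routing natively.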
--
Best regards,
Anton Nikiforov
| | From | Date | Subject |
|---|---|---|---|
| Next Message | Richard Huxton | 2004-03-18 09:15:54 | Re: Smallint - Integer Casting Problems in Plpgsql functions |
| Previous Message | Anton Nikiforov | 2004-03-18 08:38:33 | Huge number of rows |