Re: Huge number of rows

From: Francisco Reyes <lists(at)natserv(dot)com>
To: Anton Nikiforov <anton(at)nikiforov(dot)ru>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Huge number of rows
Date: 2004-03-18 23:27:58
Message-ID: 20040318232030.H22828@zoraida.natserv.net
Lists: pgsql-general

On Thu, 18 Mar 2004, Anton Nikiforov wrote:

> But I'm worried about the central database mentioned, which should
> store 240 million records daily and collect this data for years.

I have not worked with anything even remotely that big, so just a few
thoughts.

I think this is more of a hardware issue than a PostgreSQL issue, and a
good disk subsystem will be a must. The last time I was evaluating large
disk subsystems for a former employer, the one we were leaning towards
was an IBM unit in the $100,000 range.

Regardless of architecture (PC, Sun, etc.), SMP may help if you have
concurrent users. Lots and lots of memory will help too.
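
For what it's worth, the memory-related knobs in postgresql.conf would
be the first things I'd raise on a box like that. A rough sketch only;
the numbers below are placeholders, not recommendations (in 7.x,
shared_buffers and effective_cache_size are counted in 8kB pages,
sort_mem in kB):

    shared_buffers = 50000          # ~400MB of shared buffer cache (8kB pages)
    sort_mem = 32768                # 32MB per sort or index build (kB)
    effective_cache_size = 200000   # ~1.6GB; hint about the OS disk cache
    checkpoint_segments = 32        # more WAL headroom for heavy inserts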

> And the data migration problem is still an open issue for me: how to
> migrate data from fast devices (a RAID array) to slower devices (an MO
> library or something like this) while still having access to it?

I don't follow you there. Do you mean backup? You can make a pg_dump of
the data while the DB is running and then back that up.
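
Something along these lines (a rough sketch; the database name "flows"
and the backup path are made up):

    # dump a live database in compressed custom format, then archive the file
    pg_dump -Fc flows > /backup/flows-$(date +%Y%m%d).dump

The dump runs inside a single transaction, so you get a consistent
snapshot without stopping the server.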

Or were you talking about something else, like storing different data on
media of different speeds (i.e. Hierarchical Storage Management)?
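
PostgreSQL won't move data between media for you, but you could
approximate it by hand: shift old rows into an archive table, dump that
table to the slow device, then purge the live table. A rough sketch (the
table "flows", its timestamp column "ts", and the mount point are all
made up, and flows_archive is assumed to already exist with the same
schema as flows):

    # move rows older than 90 days into the archive table
    psql flows -c "INSERT INTO flows_archive SELECT * FROM flows WHERE ts < now() - interval '90 days';"
    psql flows -c "DELETE FROM flows WHERE ts < now() - interval '90 days';"
    # dump just the archive table to the slow device
    pg_dump -Fc -t flows_archive flows > /mo_library/flows_archive.dump

As long as flows_archive stays in the database the old rows remain
queryable; dropping it trades that access for disk space.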
