Re: I need your help to get opinions about this situation

From: Greg Williamson <gwilliamson39(at)yahoo(dot)com>
To: Rayner Julio Rodríguez Pimentel <rayner(dot)jrp(at)gmail(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: I need your help to get opinions about this situation
Date: 2011-03-04 01:05:37
Message-ID: 994725.43704.qm@web46102.mail.sp1.yahoo.com
Lists: pgsql-general

Rayner --

<...>
> I have a database of 1000 tables, 300 of theirs are of major growing
> with 10000 rows daily, the estimate growing for this database is of
> 2,6 TB every year.

In and of itself, sheer number of rows only hurts when you need to read most
of them; in that case good hardware (lots of spindles!) would be needed for
any database.

> There are accessing 5000 clients to this database of which will be
> accessed 500 concurrent clients at the same time.

That could be too many connections to handle natively; investigate pgpool-II
and similar connection poolers.
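For illustration only -- a minimal pgpool-II pooling sketch that funnels many
client connections into a smaller set of backend connections. The hostname and
sizes are made-up placeholders; check the pgpool-II documentation for your
version's exact parameter names and defaults:

```ini
# pgpool.conf (fragment)
listen_addresses = '*'
port = 9999                      # clients connect here instead of 5432

backend_hostname0 = 'db-master'  # hypothetical backend host
backend_port0 = 5432
backend_weight0 = 1

num_init_children = 100          # pgpool worker processes (concurrent clients served)
max_pool = 4                     # cached backend connections per worker
connection_cache = on
```

With something like this, 500 concurrent clients attach to pgpool rather than
each holding its own PostgreSQL backend.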

> There are the questions:
> 1. Is capable PostgreSQL to support this workload? Some examples
> better than this.

Depends on the native hardware and the types of queries.

> 2. It is a recommendation to use a cluster with load balancer and
> replication for this situation? Which tools are recommended for this
> purpose?

Depends on what you mean -- there is no multimaster solution in PostgreSQL
as far as I know, but if you only need one central server and read-only
slaves there are several possible solutions (Slony as an add-on, as well as
the new streaming replication capabilities in the engine itself).
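For reference, a minimal sketch of the built-in streaming replication that
arrived in PostgreSQL 9.0 -- hostnames and the replication user are
placeholders; consult the documentation for your version before relying on
this:

```ini
# master: postgresql.conf (fragment)
wal_level = hot_standby          # generate WAL suitable for a hot standby
max_wal_senders = 3              # allow a few standby connections

# master: pg_hba.conf -- allow the standby to connect for replication:
#   host  replication  repuser  192.168.0.10/32  md5

# standby: recovery.conf
#   standby_mode = 'on'
#   primary_conninfo = 'host=db-master port=5432 user=repuser'

# standby: postgresql.conf
#   hot_standby = on             # accept read-only queries during recovery
```

The standby can then serve read-only queries, which is usually what a load
balancer in front of one master and several slaves needs.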

> 3. Which are the hardware recommendations to deploy on servers? CPU,
> RAM memory capacity, Hard disk capacity and type of RAID system
> recommended to use among others like Operating System and network
> connection speed.

RAID-5 is generally a bad choice for databases. The specific answers to these
questions need more information on workload, etc.
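As a starting point for sizing, measuring actual on-disk footprint tends to be
more useful than raw row counts. A quick sketch using PostgreSQL's built-in
size functions:

```sql
-- total on-disk size of the current database
SELECT pg_size_pretty(pg_database_size(current_database()));

-- the ten largest tables (heap + indexes + TOAST)
SELECT relname,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;
```

Run against a representative sample of data, that gives a per-table growth
rate you can extrapolate into disk and RAM requirements.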

I migrated a fairly large Informix system to Postgres a few years ago and the
main issues had to do with PostGIS vs. Informix Spatial Blade; the core
tables converted cleanly, and the users and permissions were also easy. We
needed to use pgpool to support the same number of connections. This was also
a platform migration -- from Sun Solaris to Linux -- so comparing the two
directly wasn't easy.

We moved "chunks" of the application and tested a lot: spatial data first,
then the bookkeeping and accounting functions, and finally the warehouse and
large-but-infrequent jobs.

HTH,

Greg Williamson
