From: David Boreham <david_list(at)boreham(dot)org>
To: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>
Cc: pgsql-general(at)postgresql(dot)org, pgsql-performance <pgsql-performance(at)postgresql(dot)org>
Subject: Re: [PERFORM] PostgreSQL - case studies
Date: 2010-02-10 16:10:31
Message-ID: 4B72DA77.2050802@boreham.org
Lists: pgsql-general, pgsql-performance
Kevin Grittner (Kevin(dot)Grittner(at)wicourts(dot)gov) wrote:
>>> Could some of you please share some info on such scenarios- where
>>> you are supporting/designing/developing databases that run into at
>>> least a few hundred GBs of data (I know, that is small by todays'
>>> standards)?
>>>
At NuevaSync we use PG in a one-database-per-server design, with our own
replication system between cluster nodes. The largest node has more than
200GB online. This is an OLTP-type workload.