From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>
Cc: Jayadevan M <Jayadevan(dot)Maymala(at)ibsplc(dot)com>, pgsql-general(at)postgresql(dot)org, pgsql-performance <pgsql-performance(at)postgresql(dot)org>
Subject: Re: [PERFORM] PostgreSQL - case studies
Date: 2010-02-10 15:55:03
Message-ID: 20100210155503.GB17756@tamriel.snowman.net
Lists: pgsql-general pgsql-performance
* Kevin Grittner (Kevin(dot)Grittner(at)wicourts(dot)gov) wrote:
> > Could some of you please share some info on such scenarios- where
> > you are supporting/designing/developing databases that run into at
> > least a few hundred GBs of data (I know, that is small by today's
> > standards)?
Just saw this, so figured I'd comment:
tsf=> \l+
                                          List of databases
 Name | Owner    | Encoding | Collation   | Ctype       | Access privileges | Size    | Tablespace
------+----------+----------+-------------+-------------+-------------------+---------+------------
 beac | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =Tc/postgres      | 1724 GB | pg_default
Doesn't look very pretty, but the point is that it's 1.7TB. There are a
few other smaller databases on that system too. PG handles it quite
well, though this is primarily for data-mining.
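(For a single database, the same size figure `\l+` reports can be pulled
in plain SQL; this is a sketch using the `beac` database from the listing
above, and `pg_database_size()`/`pg_size_pretty()` are standard PostgreSQL
functions:)

```sql
-- On-disk size of one database, formatted human-readably (e.g. "1724 GB").
SELECT pg_size_pretty(pg_database_size('beac'));
```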
Thanks,
Stephen