From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Brian Hurt <bhurt(at)janestcapital(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org, pgsql-advocacy(at)postgresql(dot)org
Subject: Re: [PERFORM] Postgres and really huge tables
Date: 2007-01-18 21:52:58
Message-ID: 6593.1169157178@sss.pgh.pa.us
Lists: pgsql-advocacy pgsql-performance
Brian Hurt <bhurt(at)janestcapital(dot)com> writes:
> Is there any experience with Postgresql and really huge tables? I'm
> talking about terabytes (plural) here in a single table.
The 2MASS sky survey point-source catalog
http://www.ipac.caltech.edu/2mass/releases/allsky/doc/sec2_2a.html
is 470 million rows by 60 columns; I don't have it loaded up but
a very conservative estimate would be a quarter terabyte. (I've
got a copy of the data ... 5 double-sided DVDs, gzipped ...)
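(For anyone wanting to sanity-check that figure: a quarter terabyte spread over 470 million rows comes to roughly 530 bytes per row. A minimal back-of-envelope sketch in Python, where the ~9-bytes-per-field average and ~28-byte per-tuple overhead are illustrative assumptions rather than numbers from this message:

    # Rough sanity check of the "quarter terabyte" estimate.
    # The per-field average and per-tuple overhead are assumed values
    # for illustration, not figures from the original message.
    rows = 470_000_000
    cols = 60
    avg_bytes_per_field = 9   # assumed average over mixed numeric fields
    tuple_overhead = 28       # approx. PostgreSQL heap tuple header + line pointer
    total = rows * (cols * avg_bytes_per_field + tuple_overhead)
    print(total / 1e12)       # ~0.27 TB, consistent with "a quarter terabyte"

That excludes indexes and any TOAST storage, so it is indeed a conservative floor for the on-disk footprint.)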
I haven't heard from Rae Stiening recently but I know he's been using
Postgres to whack that data around since about 2001 (PG 7.1 or so,
which is positively medieval compared to current releases). So at
least for static data, it's certainly possible to get useful results.
What are your processing requirements?
regards, tom lane