From: | "Bucky Jordan" <bjordan(at)lumeta(dot)com> |
---|---|
To: | "Luke Lonergan" <llonergan(at)greenplum(dot)com>, "Spiegelberg, Greg" <gspiegelberg(at)cranel(dot)com>, "Joshua Drake" <jd(at)commandprompt(dot)com> |
Cc: | "Craig A(dot) James" <cjames(at)modgraph-usa(dot)com>, <pgsql-performance(at)postgresql(dot)org> |
Subject: | Large tables (was: RAID 0 not as fast as expected) |
Date: | 2006-09-18 14:37:58 |
Message-ID: | 78ED28FACE63744386D68D8A9D1CF5D42099C7@MAIL.corp.lumeta.com |
Lists: pgsql-performance
> Yes. What's pretty large? We've had to redefine large recently, now
> we're talking about systems with between 100TB and 1,000TB.
>
> - Luke
Well, I said large, not gargantuan :) - The largest would probably be
around a few TB, but the problem I'm dealing with at the moment is large
numbers (potentially > 1 billion) of small records (hopefully I can get
each row down to a few int4's and an int2 or so) in a single table.
Currently we're testing with and targeting the 500M-record range, but
the design needs to scale to at least 2-3 times that.
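For concreteness, here's a minimal sketch of the kind of narrow table I
have in mind (the table and column names are made up for illustration):

    -- Hypothetical narrow table: three int4 columns plus an int2,
    -- about 14 bytes of user data per row (before per-tuple overhead).
    CREATE TABLE scan_results (
        host_id   int4 NOT NULL,
        probe_id  int4 NOT NULL,
        observed  int4 NOT NULL,  -- epoch seconds of the observation
        status    int2 NOT NULL
    );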
I read one of your presentations on very large databases in PG, and saw
mention of some tables over a billion rows, so that was encouraging. The
new table partitioning in 8.x will be very useful. What's the largest DB
you've seen to date on PG, in terms of total disk storage and record
counts in the largest table(s)?
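In case it's useful to others on the thread, my understanding is that
the 8.x partitioning is inheritance plus CHECK constraints and
constraint_exclusion; a rough sketch continuing the hypothetical table
above (the epoch constants are 2006 quarter boundaries):

    -- Children carry the data; the parent stays empty.
    CREATE TABLE scan_results_2006q3 (
        CHECK (observed >= 1151712000 AND observed < 1159660800)
    ) INHERITS (scan_results);

    CREATE TABLE scan_results_2006q4 (
        CHECK (observed >= 1159660800 AND observed < 1167609600)
    ) INHERITS (scan_results);

    -- With constraint_exclusion = on (new in 8.1), the planner skips
    -- partitions whose CHECK constraint rules them out of the query:
    SET constraint_exclusion = on;
    SELECT count(*) FROM scan_results
     WHERE observed >= 1159660800 AND observed < 1162252800;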
My question is: at what point do I have to get fancy with those big
tables? From your presentation, it looks like PG can handle 1.2 billion
records or so as long as you write intelligent queries. (And stock PG
should be able to handle that, correct?)
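By "intelligent queries" I mostly mean always constraining on an indexed
(or partition-key) column so the planner never has to touch all billion
rows; e.g., continuing the sketch above:

    -- Index the column every hot query filters by...
    CREATE INDEX scan_results_2006q4_observed_idx
        ON scan_results_2006q4 (observed);

    -- ...then EXPLAIN to confirm the plan hits an index on a single
    -- partition instead of seq-scanning the whole hierarchy:
    EXPLAIN SELECT count(*)
      FROM scan_results
     WHERE observed >= 1159660800 AND observed < 1159747200;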
Also, does anyone know if/when any of the MPP stuff will be ported to
Postgres, or is the plan to keep that separate?
Thanks,
Bucky