From:       "Steve Wolfe" <steve(at)iboats(dot)com>
To:         <pgsql-general(at)postgresql(dot)org>
Subject:    Re: Row Limit on tables
Date:       2002-05-31 18:57:16
Message-ID: 001601c208d4$fd1fbdc0$d281f6cc@iboats.com
Lists:      pgsql-general
> > In practice, "what will fit on your disk" is the limit.
>
> Actually, not even. Or so I think for most cases.
(snip)
> I can fairly cheaply halve this problem by striping the database across
> two disks, but then I double the space available. If that leads me to
> double the database size, I'm back in the same hole I was in before,
> maybe worse.
You seem to be confusing the question "How many rows can I have in a
table?" with "How fast will query (X) run on a table with (Y) rows?"
The question and documents we're talking about deal with hard ceilings,
not with performance. In fact, trying to give performance estimates for
such situations is kind of silly - queries on such tables could be
anywhere from very, very fast to very, very slow. Many, many factors
are involved.
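As a back-of-envelope illustration of the "what will fit on your disk" point, here is a hedged Python sketch. The disk size and average row size are hypothetical assumptions for illustration, not figures from this thread, and real on-disk size also depends on page overhead, indexes, and dead tuples.

```python
# Rough illustration (numbers are assumptions, not benchmarks): the
# practical row ceiling is usually disk capacity divided by average row
# size, long before any hard limit in the database itself.

DISK_BYTES = 1 * 1024**4      # assume a 1 TiB volume
AVG_ROW_BYTES = 100           # assume ~100 bytes per row, incl. overhead

max_rows = DISK_BYTES // AVG_ROW_BYTES
print(f"roughly {max_rows:,} rows fit")  # on the order of 10**10 rows
```

Whether queries over ten billion rows are usable is, as noted above, an entirely separate question from whether they can be stored.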
> It's way, way too easy these days to run up a terabyte of RAID-5
> storage. One Escalade 7850 controller ($500) plus eight 160 GB drives
> ($250 each) sets you out about $2500. But the problem is, all this
> storage often doesn't have the I/O bandwidth you need actually to make
> use of it....
That's one extreme end of the spectrum, why not look at the other? You
could load up on low-latency, 10K RPM 9-gig drives, and have amazing
throughput with only a very small fraction of the total storage capacity.
It's all in what you're looking for.
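The trade-off between the two ends of that spectrum can be sketched numerically. All drive counts, capacities, and IOPS figures below are hypothetical assumptions chosen for illustration, not measurements of the hardware mentioned above.

```python
# Hypothetical back-of-envelope comparison: many small low-latency drives
# can beat a big-capacity array on random I/O even with a fraction of the
# storage.  Every number here is an assumption, not a benchmark.

def array_profile(n_drives, gb_each, iops_each):
    """Return (total capacity in GB, aggregate random IOPS)."""
    return n_drives * gb_each, n_drives * iops_each

big = array_profile(8, 160, 120)   # assumed: eight 160 GB 7200 RPM drives
fast = array_profile(8, 9, 170)    # assumed: eight 9 GB 10K RPM drives

print("capacity-heavy array:", big)   # far more GB
print("latency-optimized:   ", fast)  # far more IOPS per GB
```

The point, as the email says, is that which array is "bigger" depends entirely on whether you are measuring gigabytes or I/O operations per second.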
steve
Next message:     Curt Sampson, 2002-06-01 06:09:16, "Re: Row Limit on tables"
Previous message: Tom Jenkins, 2002-05-31 18:48:08, "plpython diff error.expected & errorr.output"