From: | "Merlin Moncure" <mmoncure(at)gmail(dot)com> |
---|---|
To: | "Luke Lonergan" <llonergan(at)greenplum(dot)com> |
Cc: | "Jim C(dot) Nasby" <jnasby(at)pervasive(dot)com>, pgsql-performance(at)postgresql(dot)org |
Subject: | Re: 1 TB of memory |
Date: | 2006-03-17 21:28:07 |
Message-ID: | b42b73150603171328y940b0e5q3027fd46f6331ad7@mail.gmail.com |
Lists: pgsql-performance
On 3/17/06, Luke Lonergan <llonergan(at)greenplum(dot)com> wrote:
> > Now what happens as soon as you start doing random I/O? :)
> If you are accessing 3 rows at a time from among billions, the problem you
> have is mostly access time - so an SSD might be very good for some OLTP
> applications. However - the idea of putting Terabytes of data into an SSD
> through a thin straw of a channel is silly.
I'll 'byte' on this... right now the price of DDR RAM is hovering
around $60/gigabyte. If you conveniently leave aside the problem of
making DDR RAM fault tolerant vs. making disks fault tolerant, you are
getting roughly five orders of magnitude faster seek time and
effectively unlimited bandwidth... at least from the physical device.
While SANs are getting cheaper, they are still fairly expensive at
$1-5/gigabyte depending on various factors. You can do the same tricks
on SSD storage as with disks.
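(A rough back-of-the-envelope check, assuming ballpark figures of
about 60 ns for a DRAM access and about 6 ms for an average disk seek
-- my assumed numbers, nothing measured here:

    6 ms / 60 ns = 6,000,000 ns / 60 ns = ~100,000x

which works out to about five orders of magnitude.)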
SSD storage is currently $1-2k/gigabyte, but I think there is huge
room to maneuver price-wise once the major players recoup their
investments and market forces kick in. IMO this process is already in
play, and the next cycle of hardware upgrades in the enterprise will
involve updating critical servers with SSD storage. I'm guessing that
by as early as 2010 a significant percentage of enterprise storage
will be SSD of some flavor.
merlin