From: "Gregory Wood" <gregw(at)com-stock(dot)com>
To: "PostgreSQL-General" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Advice for optimizing queries using Large Tables
Date: 2002-03-11 15:45:17
Message-ID: 010201c1c914$916fd9c0$7889ffcc@comstock.com
Lists: pgsql-general
> > I'm working with a table containing over 65 million records in Postgres
> > v7.1.3. The machine is a dual-processor Athlon MP1900 (Tyan Tiger board)
> > with 3GB of PC2100 DDR RAM, and 3 80GB IBM 120GXP hard drives configured
> > in a software RAID 0 array running under RedHat Linux v7.2. Queries don't
> > seem to be running as fast as "they should".
>
> Have you considered moving to a SCSI setup?
Particularly in light of IBM's recent decision to change the drive specs...
according to IBM, you shouldn't have those drives powered on for more than
333 hours a month (roughly 11 hours a day):
"Q: Would you recommend this drive in a server role?

A: No, the drive is intended to be on for no more than about 8 hours a day.
If it were only used during that period and then shut down for the day, then
it would be fine, but it definitely should NOT be used in a 24/7 role for
those customers concerned with reliability."
Quote from: http://www.storagereview.com/
Greg