| From: | "scott(dot)marlowe" <scott(dot)marlowe(at)ihs(dot)com> | 
|---|---|
| To: | Andrew Sullivan <andrew(at)libertyrms(dot)info> | 
| Cc: | pg <pgsql-general(at)postgresql(dot)org> | 
| Subject: | Re: Serious Crash last Friday | 
| Date: | 2002-07-10 20:16:53 | 
| Message-ID: | Pine.LNX.4.44.0207101414510.1682-100000@css120.ihs.com | 
| Lists: | pgsql-general | 
On Wed, 10 Jul 2002, Andrew Sullivan wrote:
> On Wed, Jul 10, 2002 at 05:19:47PM +0200, Henrik Steffen wrote:
> > Hi,
> > 
> > thanks for the information...
> > 
> > the badblocks read-only test did not report any problems,
> > do you think i should run the "read-write" test, too?
> 
> Well, if you do it'll destroy the data, so although it's the only way
> to be sure, I wouldn't unless absolutely pushed to do so.  A
> read-write badblocks test on a big partition can take many hours.
This isn't entirely true. According to the badblocks man page:
-n     Use non-destructive read-write mode.  By default only a
       non-destructive read-only test is done.  This option must
       not be combined with the -w option, as they are mutually
       exclusive.
So, with the -n switch, badblocks will save a sector, do a write/read
test, then restore the sector. Note that this is pretty slow; I've
tested it before.
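Something like this should do it (the device name here is just an
example, and you'd want to unmount the partition first; -s and -v
just give you progress and verbose output):

    # non-destructive read-write test of one partition (example device)
    umount /dev/hda2
    badblocks -n -s -v /dev/hda2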
> > tonight I will have the memory checked by memtest86 ...
> 
> Yes, that seems a good idea.  Brand new hardware doesn't guarantee
> anything, particularly when memory is so fast these days (I've had
> DIMMs fail a couple of months after they were new).
Also, another REALLY good test for bad memory is to build postgresql from 
source a couple dozen times, especially with a -j switch set to about 6 or 
so.
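Roughly like this, assuming a source tree that has already been
configured (the iteration count and -j level are just examples):

    # rebuild PostgreSQL over and over; compiler crashes (internal
    # errors, signal 11) that move around between runs almost always
    # point to bad RAM rather than a problem in the source
    for i in `seq 1 24`; do
        make clean && make -j 6 || break
    done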