From: Bruno Wolff III <bruno(at)wolff(dot)to>
To: satish satish <satish_ach2003(at)yahoo(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Data Corruption in case of abrupt failure
Date: 2004-03-10 03:58:33
Message-ID: 20040310035833.GA31629@wolff.to
Lists: pgsql-general
On Wed, Mar 03, 2004 at 04:27:33 -0800,
satish satish <satish_ach2003(at)yahoo(dot)com> wrote:
> Hi,
>
> I am trying to do some reliability tests on PostgreSQL. I have a use case where the power can go off abruptly. I initiated 10,000 insert operations and pulled out the power cable in the middle. I had the auto-commit option turned on. In 2 out of 5 runs the tables were totally corrupted and I could not read any data, whereas in the other 3 runs I was able to read the data that had been inserted.
>
> Is there any way I could avoid that data corruption and ensure that at least the records inserted up to that point are available in the database? Or are there any tools through which I can recover the data in case the database gets corrupted?
Are you using IDE disks with write caching enabled? If so, that is probably
your problem.
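To illustrate the point: PostgreSQL calls fsync() when a transaction commits, which blocks until the operating system reports the data is on stable storage. A drive whose volatile write cache acknowledges writes before they reach the platter defeats this guarantee, so a power cut can lose or corrupt "committed" data. Below is a minimal sketch of that fsync discipline; `durable_write` is an illustrative helper, not PostgreSQL code.

```python
import os
import tempfile

def durable_write(path: str, data: bytes) -> None:
    """Write data and force it to stable storage, in the same spirit as
    PostgreSQL's commit path. If the drive's write cache acknowledges the
    write while it is still in volatile cache, even fsync cannot help."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    try:
        os.write(fd, data)
        os.fsync(fd)  # block until the OS has pushed the data to the device
    finally:
        os.close(fd)

path = os.path.join(tempfile.mkdtemp(), "wal_record")
durable_write(path, b"committed")
with open(path, "rb") as f:
    print(f.read())  # b'committed'
```

On Linux systems of that era, `hdparm -W0 /dev/hda` disables the drive's write cache, trading insert throughput for real durability.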