On 06/05/2010 06:54 PM, Scott Marlowe wrote:
> On Sat, Jun 5, 2010 at 5:03 PM, Jon Schewe <jpschewe(at)mtu(dot)net> wrote:
>
>>
>> On 06/05/2010 05:52 PM, Greg Smith wrote:
>>
>>> Jon Schewe wrote:
>>>
>>>>> If that's the case, what you've measured is which filesystems are
>>>>> safe because they default to flushing drive cache (the ones that take
>>>>> around 15 minutes) and which do not (the ones that take around 2
>>>>> hours or more). You can't make ext3 flush the cache correctly no
>>>>> matter what you do with barriers; they just don't work on ext3 the
>>>>> way PostgreSQL needs them to.
>>>>>
>>>> So the 15-minute runs are doing it correctly and safely, but the slow
>>>> ones are doing the wrong thing? That would imply that ext3 is the safe
>>>> one. But your last statement suggests that ext3 is doing the wrong
>>>> thing.
>>>>
>>> I goofed and reversed the two times when writing that. As is always
>>> the case with this sort of thing, the unsafe runs are the fast ones.
>>> ext3 never does the right thing no matter how you configure it; you
>>> have to compensate for its limitations with correct hardware setup to
>>> make database writes reliable.
>>>
>> OK, so if I want the 15-minute speed, I need to give up safety (OK in
>> this case, as this is just research testing) or see if I can tune
>> postgres better.
>>
> Or use a trustworthy hardware-caching, battery-backed RAID controller,
> either in RAID mode or JBOD mode.
>
Right, because the real danger here is that if the power goes out you can
end up with a scrambled database, correct?
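
As a sanity check on my end, something like the sketch below should show
whether fsync() is really reaching the platters on a given
filesystem/drive combination (similar in spirit to the test_fsync tool in
the PostgreSQL source tree). The file name, write size, and iteration
count are arbitrary choices for illustration. A 7200 RPM drive that
honors the cache flush can't complete much more than about 120
fsyncs/second; rates in the thousands mean the writes are only landing in
the drive's volatile cache.

import os, time

# Rough sketch: time N small write+fsync cycles on the filesystem under
# test. File name, write size, and N are arbitrary illustrative values.
fd = os.open("fsync-test.dat", os.O_WRONLY | os.O_CREAT, 0o600)
n = 1000
start = time.time()
for _ in range(n):
    os.write(fd, b"x" * 512)  # a small "commit record"
    os.fsync(fd)              # ask the OS to make it durable on disk
elapsed = time.time() - start
os.close(fd)
os.unlink("fsync-test.dat")
print("%d fsyncs in %.2f s = %.0f commits/sec" % (n, elapsed, n / elapsed))

If losing the last few transactions after a crash is acceptable (but a
scrambled database is not), synchronous_commit = off in postgresql.conf
buys back much of that speed without risking corruption; fsync = off is
the setting that trades away crash safety entirely.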