Re: Weird XFS WAL problem

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Bruce Momjian" <bruce(at)momjian(dot)us>
Cc: "Greg Smith" <greg(at)2ndquadrant(dot)com>, "Craig James" <craig_james(at)emolecules(dot)com>, "Matthew Wakeling" <matthew(at)flymine(dot)org>, <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Weird XFS WAL problem
Date: 2010-06-04 15:35:51
Message-ID: 4C08D7070200002500031F66@gw.wicourts.gov
Lists: pgsql-performance

Bruce Momjian <bruce(at)momjian(dot)us> wrote:
> Kevin Grittner wrote:

>> Any decent RAID controller will ensure that the drives themselves
>> aren't using write-back caching. When we've mentioned write-back
>> versus write-through on this thread we've been talking about the
>> behavior of the *controller*. We have our controllers configured
>> to use write-back through the BBU cache as long as the battery is
>> good, but to automatically switch to write-through if the battery
>> goes bad.
>
> OK, good, but then why would a BBU RAID controller flush stuff to
> disk with a flush-all command? I thought the whole goal of BBU
> was to avoid such flushes.

That has been *precisely* my point.

I don't know what happens at the protocol level; I just know that
write barriers do *something* which causes our controllers to wait
for actual disk platter persistence, while fsync does not.
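
For what it's worth, the difference is easy to see from userspace
just by timing fsync calls. Here's a minimal sketch (the file name,
block size, and loop count are arbitrary; the test_fsync tool in the
PostgreSQL source tree does this sort of thing more carefully):

/* fsync_timer.c: time repeated write+fsync cycles.
   Build: cc -o fsync_timer fsync_timer.c */

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

#define LOOPS 1000

int
main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : "fsync_test.dat";
    char        buf[8192];
    struct timeval start, end;
    double      secs;
    int         fd, i;

    memset(buf, 'x', sizeof(buf));
    fd = open(path, O_CREAT | O_WRONLY, 0600);
    if (fd < 0)
    {
        perror("open");
        return 1;
    }

    gettimeofday(&start, NULL);
    for (i = 0; i < LOOPS; i++)
    {
        /* overwrite the same block and force it out each time,
           the way WAL commits do */
        if (pwrite(fd, buf, sizeof(buf), 0) != sizeof(buf))
        {
            perror("pwrite");
            return 1;
        }
        if (fsync(fd) != 0)
        {
            perror("fsync");
            return 1;
        }
    }
    gettimeofday(&end, NULL);

    secs = (end.tv_sec - start.tv_sec)
         + (end.tv_usec - start.tv_usec) / 1000000.0;
    printf("%d write+fsync cycles in %.3f s (%.0f/s, %.3f ms each)\n",
           LOOPS, secs, LOOPS / secs, secs * 1000.0 / LOOPS);

    close(fd);
    unlink(path);
    return 0;
}

If the cycles come out in the thousands per second, the BBU cache is
absorbing the syncs; if the rate is down near the drive's rotational
speed, something is forcing each one to the platters.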

The write barrier concept seems sound to me, and I wish it could be
used at the OS level without killing performance. I blame the
controller for not treating a barrier the same as fsync (i.e., as
long as it's in write-back mode, it should treat data as persisted
as soon as it's in the BBU cache).
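
My understanding (an assumption about the Linux block layer, not
something I've confirmed) is that the barrier reaches the controller
as the same flush-all command Bruce mentioned, and that on
reasonably recent kernels you can issue that flush yourself by
calling fsync() on the block device node; a sketch:

/* devflush.c: ask the kernel to send a full cache flush to a
   device, which (as I understand it) is what a barrier turns into
   at the protocol level.  Assumes Linux; needs read access to the
   device node.  Usage: ./devflush /dev/sdb */

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
    int fd;

    if (argc != 2)
    {
        fprintf(stderr, "usage: %s /dev/DEVICE\n", argv[0]);
        return 1;
    }

    fd = open(argv[1], O_RDONLY);
    if (fd < 0)
    {
        perror("open");
        return 1;
    }

    /* fsync() on a block device node flushes the device's write
       cache, not just dirty page-cache pages.  If the controller
       drains the BBU cache to the platters here, this takes
       milliseconds; if it acks from cache, it returns almost
       immediately. */
    if (fsync(fd) != 0)
    {
        perror("fsync");
        close(fd);
        return 1;
    }

    close(fd);
    return 0;
}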

-Kevin
