From: "Albe Laurenz *EXTERN*" <laurenz(dot)albe(at)wien(dot)gv(dot)at>
To: "Craig James *EXTERN*" <craig_james(at)emolecules(dot)com>, "Greg Smith" <gsmith(at)gregsmith(dot)com>
Cc: <pgsql-performance(at)postgresql(dot)org>
Subject: Re: RAID 10 Benchmark with different I/O schedulers
Date: 2008-05-07 07:29:05
Message-ID: D960CB61B694CF459DCFB4B0128514C202122018@exadv11.host.magwien.gv.at
Lists: pgsql-performance
Craig James wrote:
> This data is good enough for what I'm doing. There were
> reports from non-RAID users that the I/O scheduling could
> make as much as a 4x difference in performance (which makes
> sense for non-RAID), but these tests show me that three of
> the four I/O schedulers are within 10% of each other. Since
> this matches my intuition of how battery-backed RAID will
> work, I'm satisfied. If our servers get overloaded to the
> point where 10% matters, then I need a much more dramatic
> solution, like faster machines or more machines.
I should comment on this, as I am the one who reported the
big performance increase with the deadline scheduler.
I was very surprised at the increase myself, since I had not
seen any similar reports, so I thought I should share it for
whatever it is worth.
Our SAN *is* a RAID-5 with lots of cache, so there must be a flaw
in your intuition.
Performance measures depend a lot on your hardware and
software setup (e.g. the kernel version in this case) and on
the specific load. The load we used was a real-life load,
collected over several hours and extracted from the log files.
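
If anybody wants to do something similar, a replay along these
lines is easy to script. A rough sketch only: it assumes that
statement logging is on and that statements fit on one line;
the log path, the connection string, and psycopg2 itself are
just placeholder choices, not what we actually used:

# Replay the SELECT statements found in a PostgreSQL server log
# against a test database. Assumptions as stated above.
import re
import psycopg2

STMT = re.compile(r"LOG:\s+statement:\s+(SELECT\b.*)", re.IGNORECASE)

conn = psycopg2.connect("dbname=testdb")   # placeholder DSN
cur = conn.cursor()
with open("postgresql.log") as log:        # placeholder path
    for line in log:
        m = STMT.search(line)
        if m:                              # replay only read-only statements
            cur.execute(m.group(1))
            cur.fetchall()                 # discard the results
conn.close()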
My opinion is that performance observations can rarely be
generalized - I am not surprised that with a different system
and a different load you observe hardly any difference between
"cfq" and "deadline".
For the record, in our test case "noop" performed practically
as well as "deadline", while the other two did way worse.
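
In case somebody wants to repeat the comparison on their own
hardware: the scheduler can be inspected and switched per block
device through sysfs. A minimal sketch, assuming Linux and a
device called "sda" (adjust to taste):

# The active scheduler is the bracketed entry in the sysfs file,
# e.g. "noop anticipatory deadline [cfq]"; writing a name switches.
SCHED = "/sys/block/sda/queue/scheduler"

def current_scheduler():
    with open(SCHED) as f:
        for field in f.read().split():
            if field.startswith("["):
                return field.strip("[]")

def set_scheduler(name):
    with open(SCHED, "w") as f:    # needs root privileges
        f.write(name)

print("active:", current_scheduler())
# set_scheduler("deadline")        # uncomment to switch before a test run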
Like yourself, I have wondered why different I/O scheduling
algorithms should make so much difference.
Here is my home-spun theory of what may happen; tear it apart
and replace it with a better one at your convenience:
Our SAN probably (we're investigating) has its own brains to
optimize I/O, and I guess that any optimization the kernel
does on top of that can only degrade performance, because the
two algorithms might "step on each other's toes". This is
backed by "noop" performing well.
I believe that caching will not make much difference, because the
cache is way smaller than the database, and whatever is neither in
the shared buffer nor in the kernel filesystem cache is also not
likely to be in the storage system's cache. Remember that our load
was read-only.
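
To put made-up numbers on that argument: with a roughly uniform
read-only load, the chance of finding a block in the storage
cache is about the cache size divided by the database size.
The sizes below are placeholders, not our actual figures:

# Back-of-the-envelope illustration with assumed sizes.
storage_cache_gb = 4.0     # assumption: SAN controller cache
database_gb = 400.0        # assumption: database is 100x the cache
print("approx. storage cache hit rate: %.1f%%"
      % (100 * storage_cache_gb / database_gb))   # ~1.0%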
Yours,
Laurenz Albe