From: Mel Gorman <mgorman(at)suse(dot)de>
To: Gregory Smith <gregsmithpgsql(at)gmail(dot)com>
Cc: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, Josh Berkus <josh(at)agliodbs(dot)com>, Kevin Grittner <kgrittn(at)ymail(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>, Joshua Drake <jd(at)commandprompt(dot)com>, Claudio Freire <klaussfreire(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Jim Nasby <jim(at)nasby(dot)net>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "lsf-pc(at)lists(dot)linux-foundation(dot)org" <lsf-pc(at)lists(dot)linux-foundation(dot)org>, Magnus Hagander <magnus(at)hagander(dot)net>
Subject: Re: [Lsf-pc] Linux kernel impact on PostgreSQL performance
Date: 2014-01-20 14:46:06
Message-ID: 20140120144606.GT4963@suse.de
Lists: pgsql-hackers
On Fri, Jan 17, 2014 at 03:24:01PM -0500, Gregory Smith wrote:
> On 1/17/14 10:37 AM, Mel Gorman wrote:
> >There is not an easy way to tell. To be 100%, it would require an
> >instrumentation patch or a systemtap script to detect when a
> >particular page is being written back and track the context. There
> >are approximations though. Monitor nr_dirty pages over time.
>
> I have a benchmarking wrapper for the pgbench testing program called
> pgbench-tools: https://github.com/gregs1104/pgbench-tools As of
> October, on Linux it now plots the "Dirty" value from /proc/meminfo
> over time.
> <SNIP>
Cheers for pointing that out, I was not previously aware of its
existence. While I have some support for running pgbench via another kernel
testing framework (mmtests), the postgres-based tests are miserable. Right
now, for me, pgbench is only set up to reproduce a workload that detected a
scheduler regression in the past, so that it does not get reintroduced. I'd
like to have it running IO-based tests even though I typically do not
do proper regression testing for IO. I have used sysbench as a workload
generator before, but it's not great for a number of reasons.
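
On the monitoring side, a minimal sketch of the kind of /proc/meminfo
sampling involved (not what pgbench-tools itself does; the interval and the
choice of fields here are arbitrary) might look like:

import time

FIELDS = ("Dirty", "Writeback")

def sample_meminfo():
    # Pull the fields of interest out of /proc/meminfo; values are in kB.
    values = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, _, rest = line.partition(":")
            if key in FIELDS:
                values[key] = int(rest.split()[0])
    return values

if __name__ == "__main__":
    # Print "timestamp dirty_kb writeback_kb" every five seconds; redirect
    # the output to a file and plot it against the benchmark timeline.
    while True:
        v = sample_meminfo()
        print("%d %d %d" % (time.time(), v.get("Dirty", 0), v.get("Writeback", 0)))
        time.sleep(5)
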
> I've been working on the problem of how we can make a benchmark test
> case that acts enough like real busy PostgreSQL servers that we can
> share it with kernel developers, and then everyone has an objective
> way to measure changes. These rate limited tests are working much
> better for that than anything I came up with before.
>
This would be very welcome, and thanks for the other observations on IO
scheduler parameter tuning. They could potentially be used to evaluate any IO
scheduler changes. For example -- deadline scheduler with these parameters
has X transactions/sec throughput with average latency of Y milliseconds
and a maximum fsync latency of Z seconds. Evaluate how well the out-of-box
behaviour compares against it with and without some set of patches. At the
very least it would be useful for tracking historical kernel performance
over time and bisecting any regressions that got introduced. Once a test
case exists, I think many kernel developers (me at least) could run
automated bisections with it.
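
As a very rough sketch of what one such run could look like (the device
name, tunable values, client counts and target rate below are placeholders,
not recommendations, and the rate-limited mode needs a pgbench new enough
to support -R):

import subprocess

DEVICE = "sda"  # placeholder: block device backing the database

def set_io_scheduler(name, tunables):
    # Select the IO scheduler and apply its tunables via sysfs (needs root).
    with open("/sys/block/%s/queue/scheduler" % DEVICE, "w") as f:
        f.write(name)
    for knob, value in tunables.items():
        path = "/sys/block/%s/queue/iosched/%s" % (DEVICE, knob)
        with open(path, "w") as f:
            f.write(str(value))

def run_pgbench(rate):
    # Rate-limited run: -R caps the transaction rate, -T fixes the duration.
    out = subprocess.check_output(
        ["pgbench", "-c", "16", "-j", "4", "-T", "300", "-R", str(rate), "pgbench"],
        universal_newlines=True)
    # Keep the throughput and latency summary lines for later comparison.
    for line in out.splitlines():
        if line.startswith("tps") or "latency" in line:
            print(line)

if __name__ == "__main__":
    set_io_scheduler("deadline", {"read_expire": 500, "write_expire": 5000})
    run_pgbench(rate=1000)

The useful part for kernel comparisons is keeping the pgbench side constant
and only swapping the kernel, or the scheduler and its parameters, between
runs.
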
--
Mel Gorman
SUSE Labs