From: Christoph Berg <myon(at)debian(dot)org>
To: Bernd Helmle <mailings(at)oopsware(dot)de>
Cc: Michael Paquier <michael(at)paquier(dot)xyz>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Enable data checksums by default
Date: 2019-03-29 19:35:26
Message-ID: 20190329193526.GB19154@msg.df7cb.de
Lists: pgsql-hackers
Re: Bernd Helmle 2019-03-29 <3586bb9345a59bfc8d13a50a7c729be1ee6759fd(dot)camel(at)oopsware(dot)de>
> On Friday, 2019-03-29 at 23:10 +0900, Michael Paquier wrote:
> >
> > I can't really believe that many people set up shared_buffers at
> > 128kB, which would cause such a large number of page evictions, but
> > I can believe that many users have shared_buffers set to its default
> > value and that we are going to get complaints about "performance
> > drop after upgrade to v12" if we switch data checksums to on by
> > default.
>
> Yeah, I think Christoph's benchmark is based on this thinking. I
> assume this very unrealistic scenario is meant to emulate the worst
> case (many buffer reads, high checksum calculation load).
It's not unrealistic to have large seqscans that are all buffer
misses; the table just has to be big enough. The idea in my benchmark
was that if I make shared_buffers really small, and the table still
fits into RAM, I should be seeing only buffer misses, but without any
delay for actually reading from disk.
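
[Editor's note: a minimal sketch of that kind of setup; the pgbench
scale factor and database names here are illustrative assumptions,
not the exact commands from the original benchmark.]

    # Worst case: shared_buffers at its 128kB floor, so every page in
    # a seqscan is a buffer miss, with the table small enough to stay
    # in the OS page cache (no real disk reads).
    initdb --data-checksums -D data
    echo "shared_buffers = 128kB" >> data/postgresql.conf
    pg_ctl -D data -l logfile start

    createdb bench
    pgbench -i -s 100 bench   # ~1.3 GB pgbench_accounts; assumes it fits in RAM
    psql bench -c "SELECT count(*) FROM pgbench_accounts"   # warm the OS cache
    psql bench -c "EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM pgbench_accounts"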
Christoph