From: Andres Freund <andres(at)2ndquadrant(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Greg Smith <greg(at)2ndquadrant(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Ants Aasma <ants(at)cybertec(dot)at>, Florian Pflug <fgp(at)phlo(dot)org>, Bruce Momjian <bruce(at)momjian(dot)us>, Jeff Davis <pgsql(at)j-davis(dot)com>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Enabling Checksums
Date: 2013-04-22 15:33:03
Message-ID: 20130422153303.GH4052@awork2.anarazel.de
Lists: pgsql-hackers
On 2013-04-22 11:27:25 -0400, Robert Haas wrote:
> On Wed, Apr 17, 2013 at 8:21 PM, Greg Smith <greg(at)2ndquadrant(dot)com> wrote:
> >> The more I read of this thread, the more unhappy I get. It appears that
> >> the entire design process is being driven by micro-optimization for CPUs
> >> being built by Intel in 2013.
> >
> > And that's not going to get anyone past review, since all the tests I've
> > been doing the last two weeks are on how fast an AMD Opteron 6234 with OS
> > cache >> shared_buffers can run this. The main thing I'm still worried
> > about is what happens when you have a fast machine that can move memory
> > around very quickly and an in-memory workload, but it's hamstrung by the
> > checksum computation--and it's not a 2013 Intel machine.
>
> This is a good point. However, I don't completely agree with the
> conclusion that we shouldn't be worrying about any of this right now.
> While I agree with Tom that it's far too late to think about any
> CPU-specific optimizations for 9.3, I have a lot of concern, based on
> Ants's numbers, that we've picked a checksum algorithm which is hard
> to optimize for performance. If we don't get that fixed for 9.3,
> we're potentially looking at inflicting many years of serious
> suffering on our user base. If we at least get the *algorithm* right
> now, we can worry about optimizing it later. If we get it wrong,
> we'll be living with the consequence of that for a really long time.
>
> I wish that we had not scheduled beta quite so soon, as I am sure
> there will be even more resistance to changing this after beta. But
> I'm having a hard time escaping the conclusion that we're on the edge
> of shipping something we will later regret quite deeply. Maybe I'm
> wrong?
I don't see us moving away from CRCs anymore at this point either. But I
think we should at least change the polynomial to one that:
a) has better error-detection properties
b) can be noticeably sped up on a good part of the hardware pg runs on
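For illustration only (this mail doesn't name a specific polynomial): the
Castagnoli polynomial (CRC-32C) is one candidate that ticks both boxes,
since its error-detection properties are well studied and SSE4.2-capable
x86 CPUs provide a dedicated CRC32 instruction for it. A minimal sketch,
assuming a made-up crc32c_sse42() helper and that the caller has already
verified SSE4.2 support (build with -msse4.2):

#include <stddef.h>
#include <stdint.h>
#include <string.h>
#include <nmmintrin.h>      /* _mm_crc32_u8 / _mm_crc32_u64 (SSE4.2) */

/*
 * Hypothetical helper, not PostgreSQL code: CRC-32C over a buffer using
 * the SSE4.2 CRC32 instruction.  Assumes the caller has already checked
 * via CPUID that SSE4.2 is available.
 */
static uint32_t
crc32c_sse42(uint32_t crc, const void *data, size_t len)
{
    const unsigned char *p = data;

    /* Eight input bytes per instruction on 64-bit builds. */
    while (len >= 8)
    {
        uint64_t chunk;

        memcpy(&chunk, p, 8);
        crc = (uint32_t) _mm_crc32_u64(crc, chunk);
        p += 8;
        len -= 8;
    }

    /* Remaining tail, one byte at a time. */
    while (len-- > 0)
        crc = _mm_crc32_u8(crc, *p++);

    return crc;
}

Callers following the usual CRC-32C convention would pass 0xFFFFFFFF as
the initial crc and invert the result at the end.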
If we are feeling really adventurous we can also switch to a faster CRC
implementation; there are enough of them around, and I know that at least
my proposed patch from some years ago (which is far from the fastest that
is doable) is in production use in some places.
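To give a rough idea of what a faster pure-software CRC can look like
(this is not that patch; table-driven "slicing-by-4" is just one common
technique, and the crc32c_slice4() name is made up for this sketch): it
processes four input bytes per loop iteration instead of one, at the cost
of 4 KB of lookup tables.

#include <stddef.h>
#include <stdint.h>

/* Reflected CRC-32C (Castagnoli) polynomial. */
#define CRC32C_POLY 0x82F63B78u

static uint32_t crc_tab[4][256];

/* Build the four lookup tables used by slicing-by-4. */
static void
crc32c_slice4_init(void)
{
    for (int i = 0; i < 256; i++)
    {
        uint32_t crc = (uint32_t) i;

        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ ((crc & 1) ? CRC32C_POLY : 0);
        crc_tab[0][i] = crc;
    }
    for (int i = 0; i < 256; i++)
    {
        uint32_t crc = crc_tab[0][i];

        /* crc_tab[t][i] advances table 0's entry by t extra zero bytes. */
        for (int t = 1; t < 4; t++)
        {
            crc = crc_tab[0][crc & 0xFF] ^ (crc >> 8);
            crc_tab[t][i] = crc;
        }
    }
}

/* Four input bytes per round of table lookups instead of one. */
static uint32_t
crc32c_slice4(const unsigned char *p, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    while (len >= 4)
    {
        crc ^= (uint32_t) p[0] | ((uint32_t) p[1] << 8) |
               ((uint32_t) p[2] << 16) | ((uint32_t) p[3] << 24);
        crc = crc_tab[3][crc & 0xFF] ^
              crc_tab[2][(crc >> 8) & 0xFF] ^
              crc_tab[1][(crc >> 16) & 0xFF] ^
              crc_tab[0][crc >> 24];
        p += 4;
        len -= 4;
    }
    /* Tail: fall back to one byte at a time. */
    while (len-- > 0)
        crc = crc_tab[0][(crc ^ *p++) & 0xFF] ^ (crc >> 8);

    return crc ^ 0xFFFFFFFFu;
}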
Greetings,
Andres Freund
--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services