Re: Bugs in our qsort implementation

From: Peter Geoghegan <pg(at)heroku(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Bugs in our qsort implementation
Date: 2015-07-17 00:18:34
Message-ID: CAM3SWZRAvj-8BttNFb4_jUnta3runoZjrGnOeeo3LTKuQF=EqA@mail.gmail.com
Lists: pgsql-hackers

On Thu, Jul 16, 2015 at 5:05 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> It's possible that this issue can only manifest on 9.4 and up where
> we have the ability for tuplesort to allocate work arrays approaching
> INT_MAX elements. But I don't have a lot of faith in that; I think the
> worst-case stack depth for the way we have it now could be as bad as O(N),
> so in principle a crash could be possible with significantly smaller input
> arrays. I think we'd better back-patch this all the way.

+1.
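
For context, the textbook way to keep quicksort's stack depth bounded
regardless of input is to recurse only into the smaller partition and
iterate on the larger one, which caps the depth at O(log N). A minimal
sketch of that technique (plain Lomuto partitioning for brevity, not
B&M's partitioning scheme, and not the actual patch):

#include <stddef.h>

static void
swap_int(int *a, int *b)
{
    int     t = *a;

    *a = *b;
    *b = t;
}

/*
 * Quicksort that always recurses into the smaller partition and loops
 * on the larger one, so the recursion depth stays O(log N) even when
 * every partitioning step is maximally lopsided.
 */
void
sort_bounded_stack(int *a, size_t n)
{
    while (n > 1)
    {
        int     pivot = a[n - 1];
        size_t  i = 0;
        size_t  j;

        /* Lomuto partition around the last element */
        for (j = 0; j < n - 1; j++)
            if (a[j] < pivot)
                swap_int(&a[i++], &a[j]);
        swap_int(&a[i], &a[n - 1]);

        /* left part has i elements, right part has n - i - 1 */
        if (i < n - i - 1)
        {
            sort_bounded_stack(a, i);   /* recurse on the smaller side */
            a += i + 1;                 /* iterate on the larger side */
            n -= i + 1;
        }
        else
        {
            sort_bounded_stack(a + i + 1, n - i - 1);
            n = i;
        }
    }
}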

If you want to generate a worst case, McIlroy wrote a program that
will do it for you [1]. AFAIR, it produces a series of self-consistent
comparisons in the "gas" comparator that force a worst-case outcome
(as opposed to producing a simple worst-case input, which would be
more convenient in this kind of scenario). This is known to
specifically affect the Bentley & McIlroy implementation, as the
paper discusses.
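
For reference, here is a compressed sketch of that adversary, close to
the antiqsort program in the paper: the comparator keeps every value as
"gas" until it is forced to commit ("freeze") one, stays self-consistent
about the answers it has already given, and thereby steers whatever
qsort is calling it toward its worst case. The driver in main() and the
choice of n are just illustrative:

#include <stdio.h>
#include <stdlib.h>

static int *val;        /* values assigned so far; "gas" = not yet frozen */
static int  gas;        /* sentinel greater than any frozen value */
static int  nsolid;     /* number of values frozen so far */
static int  candidate;  /* index the adversary suspects is the pivot */
static long ncmp;       /* comparison counter */

static int
freeze(int x)
{
    return val[x] = nsolid++;
}

static int
cmp(const void *px, const void *py)
{
    int     x = *(const int *) px;
    int     y = *(const int *) py;

    ncmp++;
    if (val[x] == gas && val[y] == gas)
    {
        /* both still gas: commit the suspected pivot, leave the other */
        if (x == candidate)
            freeze(x);
        else
            freeze(y);
    }
    if (val[x] == gas)
        candidate = x;
    else if (val[y] == gas)
        candidate = y;
    return val[x] - val[y];
}

int
main(void)
{
    int     n = 10000;
    int    *ptr = malloc(n * sizeof(int));
    int     i;

    val = malloc(n * sizeof(int));
    gas = n - 1;
    nsolid = candidate = 0;
    ncmp = 0;
    for (i = 0; i < n; i++)
    {
        ptr[i] = i;     /* sort indexes so the comparator can consult val[] */
        val[i] = gas;
    }
    qsort(ptr, n, sizeof(int), cmp);
    printf("n = %d, comparisons = %ld\n", n, ncmp);

    /*
     * val[] now records the values the adversary committed to; feeding
     * them back to the same qsort reproduces the blow-up (entries still
     * at gas simply sort as the largest values).
     */
    free(ptr);
    free(val);
    return 0;
}

Of course, against a libc whose qsort() isn't actually a quicksort the
comparison count stays modest; the adversary only punishes
quicksort-family implementations such as B&M's.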

[1] http://www.cs.dartmouth.edu/~doug/mdmspe.pdf
--
Peter Geoghegan
