From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Kouhei Kaigai <kaigai(at)ak(dot)jp(dot)nec(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: DBT-3 with SF=20 got failed
Date: 2015-09-24 13:49:16
Message-ID: 5603FF5C.4070107@2ndquadrant.com
Lists: pgsql-hackers
On 09/24/2015 01:51 PM, Robert Haas wrote:
> On Thu, Sep 24, 2015 at 5:50 AM, Tomas Vondra
> <tomas(dot)vondra(at)2ndquadrant(dot)com> wrote:
>> I however quite dislike the dismissal of the possible impact. It should be
>> the responsibility of the person introducing the change to show that no such
>> impact actually exists, not just waving it off as "unbased on any evidence"
>> when there's no evidence presented.
>
> So, we're talking about determining the behavior in a case that
> currently fails. Making it behave like a case that currently works
> can't but be an improvement. Making it do something that currently
> never happens might be better still, or it might be equivalent, or
> it might be worse. I just don't buy the argument that somebody's got
> to justify on performance grounds a decision not to allocate more
> memory than we currently ever allocate. That seems 100% backwards to
> me.
Yes, it's true that if you hit the issue it fails, so I understand your
view that it's a win to fix this by introducing the (arbitrary) limit. I
disagree with this view because the behavior changes abruptly at the
limit - if you get a good estimate just below the limit, you get no
resize, but if you get a slightly higher estimate you get a resize.
So while it does not introduce a behavior change in this particular case
(because it currently fails, as you point out), it does introduce a
behavior change in general - it triggers behavior that simply does not
happen below the limit. Would we accept the change if the proposed limit
were 256MB, for example?
It also seems to me that we don't really need the hash table until after
MultiExecHash(), so maybe building the hash table incrementally is simply
unnecessary - we could track the optimal number of buckets and build the
buckets once at the end of MultiExecHash() (essentially at the place
where we do the resize now). We'd still have to walk the tuples and
insert them into the buckets, but that seems more efficient than the
incremental build (though I have no data to support that at this point).
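A rough sketch of the idea, not PostgreSQL's nodeHash.c code - all the
names (DeferredHashTable, dh_insert, dh_build_buckets, ...) are invented
for illustration. During the load we only keep a single list of tuples
and a counter; the bucket array is sized from the real tuple count and
filled in one pass at the end:

#include <stdlib.h>
#include <stdint.h>

typedef struct Tuple
{
	uint32_t	hashvalue;
	struct Tuple *next;		/* chain link, reused for the final buckets */
} Tuple;

typedef struct DeferredHashTable
{
	Tuple	   *all_tuples;	/* single list of everything inserted so far */
	size_t		ntuples;
	Tuple	  **buckets;	/* built only at the end */
	size_t		nbuckets;
} DeferredHashTable;

/* Load phase: just prepend to the list and count. */
void
dh_insert(DeferredHashTable *ht, Tuple *tup)
{
	tup->next = ht->all_tuples;
	ht->all_tuples = tup;
	ht->ntuples++;
}

/*
 * End of load (where MultiExecHash would currently resize): size the
 * bucket array from the actual tuple count, then walk the list once and
 * link each tuple into its bucket.
 */
void
dh_build_buckets(DeferredHashTable *ht)
{
	size_t		nbuckets = 1;
	Tuple	   *tup = ht->all_tuples;

	while (nbuckets < ht->ntuples)	/* round up to a power of two */
		nbuckets <<= 1;

	ht->nbuckets = nbuckets;
	ht->buckets = calloc(nbuckets, sizeof(Tuple *));

	while (tup != NULL)
	{
		Tuple	   *next = tup->next;
		size_t		b = tup->hashvalue & (nbuckets - 1);

		tup->next = ht->buckets[b];
		ht->buckets[b] = tup;
		tup = next;
	}
	ht->all_tuples = NULL;
}

The point is that the bucket array is allocated exactly once, at its
final size, so the question of when (and whether) to resize during the
build simply goes away.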
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services