From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Martijn van Oosterhout <kleptog(at)svana(dot)org>
Cc: Peter Eisentraut <peter_e(at)gmx(dot)net>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Maximum statistics target
Date: 2008-03-07 21:48:45
Message-ID: 28138.1204926525@sss.pgh.pa.us
Lists: pgsql-hackers
Martijn van Oosterhout <kleptog(at)svana(dot)org> writes:
> On Fri, Mar 07, 2008 at 07:25:25PM +0100, Peter Eisentraut wrote:
>> What's the problem with setting it to ten million if I have ten million values
>> in the table and I am prepared to spend the resources to maintain those
>> statistics?
> That it'll probably take 10 million seconds to calculate the plans
> using it? I think Tom pointed out there are a few places that are
> O(n^2) in the number of entries...
I'm not wedded to the number 1000 in particular --- obviously that's
just a round number. But it would be good to see some performance tests
with larger settings before deciding that we don't need a limit.
IIRC, eqjoinsel is one of the weak spots, so tests involving planning of
joins between two tables with large MCV lists would be a good place to
start.
regards, tom lane
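
A minimal sketch of the kind of test suggested above (table and column
names are illustrative, not from the thread; per-column targets above
1000 assume a build with the current cap lifted, as proposed upthread):

    -- Skewed data so ANALYZE keeps a large MCV list: the product of
    -- two uniform randoms biases values toward zero.
    CREATE TABLE t1 AS
        SELECT (random() * random() * 10000)::int AS val
        FROM generate_series(1, 1000000);
    CREATE TABLE t2 AS
        SELECT (random() * random() * 10000)::int AS val
        FROM generate_series(1, 1000000);

    -- Raise the per-column statistics target and regather stats.
    ALTER TABLE t1 ALTER COLUMN val SET STATISTICS 10000;
    ALTER TABLE t2 ALTER COLUMN val SET STATISTICS 10000;
    ANALYZE t1;
    ANALYZE t2;

    -- eqjoinsel compares the two MCV lists pairwise, so planning cost
    -- grows roughly as the product of the list lengths.  EXPLAIN plans
    -- the query without executing it, so \timing in psql isolates the
    -- planning overhead.
    \timing on
    EXPLAIN SELECT * FROM t1 JOIN t2 USING (val);

Repeating the ALTER/ANALYZE/EXPLAIN steps at targets of, say, 10, 100,
1000, and 10000 would show how planning time scales with MCV list length.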