| From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
|---|---|
| To: | Noah Misch <noah(at)leadboat(dot)com> |
| Cc: | Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, Robert Haas <robertmhaas(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org, Andres Freund <andres(at)anarazel(dot)de> |
| Subject: | Re: *_collapse_limit, geqo_threshold |
| Date: | 2009-07-09 04:06:29 |
| Message-ID: | 19649.1247112389@sss.pgh.pa.us |
| Lists: | pgsql-hackers |
Noah Misch <noah(at)leadboat(dot)com> writes:
> Describing in those terms illuminates much. While the concepts do suggest 2^N
> worst-case planning cost, my artificial test case showed a rigid 4^N pattern;
> what could explain that?
Well, the point of the 2^N concept is just that adding one more relation
multiplies the planning work by a constant factor. It's useful data
that you find the factor to be about 4, but I wouldn't have expected the
model to tell us that.
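[A minimal illustration of where a constant per-relation factor can come from, assuming a DPsize-style enumeration over a clique join (every relation joins to every other). This is an illustrative sketch, not PostgreSQL's actual planner code: it just counts the disjoint (left, right) subset pairs such a dynamic-programming planner could examine.]

```python
def dp_pairs(n: int) -> int:
    # Pairs of disjoint non-empty subsets of an n-element set:
    # each element goes in the left input, the right input, or
    # neither (3^n assignments), minus those with an empty left
    # side (2^n), minus those with an empty right side (2^n),
    # plus the doubly-empty assignment counted twice.
    return 3**n - 2 * 2**n + 1

# The ratio between successive N settles toward a constant (3 for
# this count), matching the "one more relation multiplies the
# planning work by a constant factor" model; measured factors can
# differ since real planning cost is not just pair counting.
for n in range(2, 9):
    ratio = dp_pairs(n) / dp_pairs(n - 1) if n > 2 else float("nan")
    print(n, dp_pairs(n), round(ratio, 2))
```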
regards, tom lane