From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Kouhei Kaigai <kaigai(at)ak(dot)jp(dot)nec(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: DBT-3 with SF=20 got failed
Date: 2015-09-24 17:04:55
Message-ID: 12848.1443114295@sss.pgh.pa.us
Lists: pgsql-hackers
Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com> writes:
> But what about computing the expected number of batches, but always
> starting execution assuming no batching? Only if we actually fill
> work_mem would we start batching, using the expected number of batches.
Hmm. You would likely be doing the initial data load with a "too small"
numbuckets for single-batch behavior, but if you successfully loaded all
the data then you could resize the table at little penalty. So yeah,
that sounds like a promising approach for cases where the initial rowcount
estimate is far above reality.
But I kinda thought we did this already, actually.
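For illustration, here is a minimal self-contained sketch of the idea
being discussed. It is not the actual nodeHash.c code; all names, the
constants, and the simplified memory accounting are hypothetical. It
just shows the control flow: plan for nbatch_expected batches, load
optimistically as a single batch, fall back to the planned batch count
only if work_mem overflows, and otherwise resize the bucket array once
after the load.

/* Hypothetical sketch of lazy batching for a hash join inner load. */
#include <stdio.h>

#define WORK_MEM_BYTES   (64 * 1024)   /* pretend work_mem */
#define TUPLE_BYTES      32            /* pretend per-tuple overhead */

typedef struct HashTab
{
    int     nbuckets;        /* current bucket count */
    int     nbatch;          /* stays 1 while we are optimistic */
    int     nbatch_expected; /* planner's estimate, used only on spill */
    long    ntuples;
    long    bytes_used;
} HashTab;

/* Switch from single-batch mode to the pre-computed batch count. */
static void
switch_to_batching(HashTab *ht)
{
    ht->nbatch = ht->nbatch_expected;
    printf("work_mem exceeded at %ld tuples: repartitioning into %d batches\n",
           ht->ntuples, ht->nbatch);
    /* Real code would start writing overflowing tuples to batch files here. */
}

/*
 * Grow the bucket array once the true row count is known; cheap because it
 * is just a re-link of tuples that are already in memory.
 */
static void
resize_buckets(HashTab *ht)
{
    int     newbuckets = 1;

    while (newbuckets < ht->ntuples)
        newbuckets <<= 1;
    printf("resizing buckets: %d -> %d\n", ht->nbuckets, newbuckets);
    ht->nbuckets = newbuckets;
}

static void
insert_tuple(HashTab *ht)
{
    ht->ntuples++;
    ht->bytes_used += TUPLE_BYTES;
    if (ht->nbatch == 1 && ht->bytes_used > WORK_MEM_BYTES)
        switch_to_batching(ht);
}

int
main(void)
{
    /* The planner expected 8 batches and a huge inner relation ... */
    HashTab ht = {.nbuckets = 1024, .nbatch = 1, .nbatch_expected = 8};
    long    actual_rows = 1500;        /* ... but reality is much smaller */

    for (long i = 0; i < actual_rows; i++)
        insert_tuple(&ht);

    if (ht.nbatch == 1)
        resize_buckets(&ht);           /* single-batch load succeeded */
    printf("done: %ld tuples, %d buckets, %d batch(es)\n",
           ht.ntuples, ht.nbuckets, ht.nbatch);
    return 0;
}

The point of the sketch is that the planner's batch estimate is only
consulted at the moment work_mem actually overflows, so a rowcount
overestimate costs nothing beyond one bucket-array resize at the end of
the load.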
regards, tom lane