From: Amit Langote <Langote_Amit_f8(at)lab(dot)ntt(dot)co(dot)jp>
To: Andres Freund <andres(at)anarazel(dot)de>, Amit Khandekar <amitdkhan(dot)pg(at)gmail(dot)com>
Cc: Ashutosh Bapat <ashutosh(dot)bapat(at)enterprisedb(dot)com>, Amit Langote <amitlangote09(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: partitioning - changing a slot's descriptor is expensive
Date: 2018-06-29 07:05:28
Message-ID: 0f67a88a-6fb8-aba6-5228-8c64c2887fef@lab.ntt.co.jp
Lists: pgsql-hackers
On 2018/06/29 15:23, Amit Langote wrote:
> Instead of a single TupleTableSlot attached at partition_tuple_slot, we
> allocate an array of TupleTableSlot pointers of the same length as the
> number of partitions, as you mentioned upthread. We then call
> MakeTupleTableSlot() only for a partition that needs it, passing it the
> partition's TupleDesc. Allocated slots are remembered in a list, and
> ExecDropSingleTupleTableSlot is called on each of them when the plan
> ends. Note that the array of slots is not allocated at all if none of
> the partitions affected by a given query (or COPY) needs to convert tuples.
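To make the idea concrete, here is a minimal sketch of how such lazily
allocated per-partition slots could be managed. It is not the actual patch:
the struct PartitionRoutingSlots and the helper names below are hypothetical;
only MakeTupleTableSlot, ExecDropSingleTupleTableSlot, palloc0 and the List
API are existing backend facilities.

    #include "postgres.h"

    #include "executor/tuptable.h"
    #include "nodes/pg_list.h"

    /* Hypothetical bookkeeping struct; field names are illustrative only. */
    typedef struct PartitionRoutingSlots
    {
        int              num_partitions;   /* length of the array below */
        TupleTableSlot **partition_slots;  /* NULL until first conversion */
        List            *allocated_slots;  /* slots to drop at plan end */
    } PartitionRoutingSlots;

    /*
     * Return the slot for partition 'partidx', creating it (and, on first
     * use, the pointer array itself) only when a conversion requires it.
     */
    static TupleTableSlot *
    get_partition_slot(PartitionRoutingSlots *prs, int partidx,
                       TupleDesc partdesc)
    {
        if (prs->partition_slots == NULL)
            prs->partition_slots = (TupleTableSlot **)
                palloc0(prs->num_partitions * sizeof(TupleTableSlot *));

        if (prs->partition_slots[partidx] == NULL)
        {
            /* Build a slot with this partition's own descriptor. */
            TupleTableSlot *slot = MakeTupleTableSlot(partdesc);

            prs->partition_slots[partidx] = slot;
            prs->allocated_slots = lappend(prs->allocated_slots, slot);
        }

        return prs->partition_slots[partidx];
    }

    /* Called once when the plan (or COPY) ends; drops only what was made. */
    static void
    drop_partition_slots(PartitionRoutingSlots *prs)
    {
        ListCell   *lc;

        foreach(lc, prs->allocated_slots)
            ExecDropSingleTupleTableSlot((TupleTableSlot *) lfirst(lc));
    }

Keeping the created slots in a list means cleanup only touches slots that
were actually allocated, instead of scanning an array that may be mostly NULL.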
I forgot to show the effect the patch has (measured on my workstation, so
the numbers are a bit noisy).
create table p (a int, b int, c int) partition by range (a);
create table p1 (b int, a int, c int);
alter table p attach partition p1 for values from (1) to (maxvalue);
Note that every row will end up in p1 and will therefore require tuple
conversion.
-- 2 million records
copy (select i, i+1, i+2 from generate_series(1, 2000000) i) to
'/tmp/data.csv' csv;
Un-patched:
truncate p;
copy p from '/tmp/data.csv' csv;
COPY 2000000
Time: 8521.308 ms (00:08.521)
truncate p;
copy p from '/tmp/data.csv' csv;
COPY 2000000
Time: 8160.741 ms (00:08.161)
truncate p;
copy p from '/tmp/data.csv' csv;
COPY 2000000
Time: 8389.925 ms (00:08.390)
Patched:
truncate p;
copy p from '/tmp/data.csv' csv;
COPY 2000000
Time: 7716.568 ms (00:07.717)
truncate p;
copy p from '/tmp/data.csv' csv;
COPY 2000000
Time: 7569.224 ms (00:07.569)
truncate p;
copy p from '/tmp/data.csv' csv;
COPY 2000000
Time: 7572.085 ms (00:07.572)
So, there is at least some speedup -- roughly 9% on average with this data
set (about 8.4 s unpatched vs. about 7.6 s patched).
Thanks,
Amit