From: Amit Khandekar <amitdkhan(dot)pg(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Amit Langote <Langote_Amit_f8(at)lab(dot)ntt(dot)co(dot)jp>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>, Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
Subject: Re: [HACKERS] UPDATE of partition key
Date: 2018-01-12 17:23:55
Message-ID: CAJ3gD9fjFYWOLn874c=Fe_g5QGuJ89_cKLiRZMm3c9z8jtksLw@mail.gmail.com
Lists: pgsql-hackers
On 12 January 2018 at 20:24, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> On Fri, Jan 12, 2018 at 5:12 AM, Amit Khandekar <amitdkhan(dot)pg(at)gmail(dot)com> wrote:
>> The reason why I am keeping the map_required field inside a structure
>> along with the map, as opposed to in a separate array, is so that we
>> can do the on-demand allocation for both the per-leaf array and the
>> per-subplan array.
>
> Putting the map_required field inside the structure with the map makes
> it completely silly to do the 0/1/2 thing, because the whole structure
> is going to be on the same cache line anyway. It won't save anything
> to access the flag instead of a pointer in the same struct.
I see. Got it.
> Also,
> the uint8 will be followed by 7 bytes of padding, because the pointer
> that follows will need to begin on an 8-byte boundary (at least, on
> 64-bit machines), so this will use more memory.
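Understood about the padding as well. To make sure I am reading the
layout argument right, here is a rough sketch of the two layouts
(hypothetical struct name; sizes assume a 64-bit machine):

    /* Flag kept inside a struct along with the map: */
    typedef struct TupConvMapEntry
    {
        uint8       map_required;   /* followed by 7 bytes of padding */
        TupleConversionMap *map;    /* must start on an 8-byte boundary */
    } TupConvMapEntry;              /* 16 bytes per element */

    /* Your proposal, two parallel arrays: */
    uint8      *mt_per_leaf_tupconv_required;       /* 1 byte per element */
    TupleConversionMap **mt_per_leaf_tupconv_maps;  /* 8 bytes per element */

So the parallel arrays take roughly 9 bytes per element instead of 16.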
>
> What I suggest is:
>
> #define MT_CONVERSION_REQUIRED_UNKNOWN 0
> #define MT_CONVERSION_REQUIRED_YES 1
> #define MT_CONVERSION_REQUIRED_NO 2
>
> In ModifyTableState:
>
> uint8 *mt_per_leaf_tupconv_required;
> TupleConversionMap **mt_per_leaf_tupconv_maps;
>
> In PartitionTupleRouting:
>
> int *subplan_partition_offsets;
>
> When you initialize the ModifyTableState, do this:
>
>     mtstate->mt_per_leaf_tupconv_required =
>         palloc0(sizeof(uint8) * numResultRelInfos);
>     mtstate->mt_per_leaf_tupconv_maps =
>         palloc0(sizeof(TupleConversionMap *) * numResultRelInfos);
>
A few points below on which I want to confirm that we are on the same page ...
> When somebody needs a map, then
>
> (1) if they need it by subplan index, first use
> subplan_partition_offsets to convert it to a per-leaf index
Before that, we need to check whether there *is* an offsets array. If
there are no partitions, there is only going to be a per-subplan
array; there won't be an offsets array. But I guess you are saying:
"do the on-demand allocation only for leaf partitions; if there are no
partitions, the per-subplan maps will always be allocated for each of
the subplans from the beginning". So if there is no offsets array,
just return mtstate->mt_per_subplan_tupconv_maps[subplan_index]
without any further checks, as in the sketch below.
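(A minimal sketch of the lookup as I understand it; the function names
are hypothetical, and I am assuming subplan_partition_offsets[i]
directly gives the per-leaf index for subplan i:)

    static TupleConversionMap *
    tupconv_map_for_subplan(ModifyTableState *mtstate, int subplan_index)
    {
        PartitionTupleRouting *proute = mtstate->mt_partition_tuple_routing;

        /*
         * No tuple routing, hence no offsets array: the per-subplan maps
         * were allocated up front, so return the map (or NULL) directly.
         */
        if (proute == NULL || proute->subplan_partition_offsets == NULL)
            return mtstate->mt_per_subplan_tupconv_maps[subplan_index];

        /* Convert to a per-leaf index and do the on-demand lookup of (2). */
        return tupconv_map_for_leaf(mtstate,
                        proute->subplan_partition_offsets[subplan_index]);
    }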
>
> (2) then write a function that takes the per-leaf index and does this:
>
> switch (mtstate->mt_per_leaf_tupconv_required[leaf_part_index])
> {
>     case MT_CONVERSION_REQUIRED_UNKNOWN:
>         map = convert_tuples_by_name(...);
>         if (map == NULL)
>             mtstate->mt_per_leaf_tupconv_required[leaf_part_index] =
>                 MT_CONVERSION_REQUIRED_NO;
>         else
>         {
>             mtstate->mt_per_leaf_tupconv_required[leaf_part_index] =
>                 MT_CONVERSION_REQUIRED_YES;
>             mtstate->mt_per_leaf_tupconv_maps[leaf_part_index] = map;
>         }
>         return map;
>     case MT_CONVERSION_REQUIRED_YES:
>         return mtstate->mt_per_leaf_tupconv_maps[leaf_part_index];
>     case MT_CONVERSION_REQUIRED_NO:
>         return NULL;
> }
Yeah, right.
But after that, I am not sure why the mt_per_subplan_tupconv_maps[]
array is needed at all. We are always going to convert the subplan
index into a leaf index, so the per-subplan map array will never come
into the picture. Or are you saying it will be allocated and used only
when there are no partitions? In one of your earlier replies, you did
mention trying to share the maps between the two arrays, which implies
you were considering both arrays being used at the same time. Did you
mean something like the sketch below?
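(A hypothetical sketch of what I understood by "sharing", i.e. the
per-subplan slots simply pointing at the maps built for the
corresponding leaf partitions rather than building them twice:)

    /*
     * For each UPDATE subplan, reuse the map already built for the
     * corresponding leaf partition instead of calling
     * convert_tuples_by_name() again.
     */
    for (i = 0; i < mtstate->mt_nplans; i++)
    {
        int     leaf_index = proute->subplan_partition_offsets[i];

        mtstate->mt_per_subplan_tupconv_maps[i] =
            mtstate->mt_per_leaf_tupconv_maps[leaf_index];
    }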
--
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company