From: Amit Kapila <amit(dot)kapila(at)huawei(dot)com>
To: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "Marc Cousin" <cousinmarc(at)gmail(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Performance problem in PLPgSQL
Date: 2013-07-24 06:07:46
Message-ID: 000101ce8834$1e7ba660$5b72f320$@kapila@huawei.com
Lists: pgsql-hackers
On Wednesday, July 24, 2013 4:23 AM Tom Lane wrote:
> I wrote:
> > Marc Cousin <cousinmarc(at)gmail(dot)com> writes:
> >> The example below is of course extremely simplified, and obviously
> >> not what we are really doing in the database, but it exhibits the
> >> slowdown between 9.1.9 and 9.2.4.
>
> > Hm. Some experimentation shows that the plan cache is failing to
> > switch to a generic plan, but I'm not sure why the cast would have
> > anything to do with that ...
>
> Hah, I see why:
>
> (gdb) s
> 1009 if (plansource->generic_cost < avg_custom_cost * 1.1)
> (gdb) p plansource->generic_cost
> $18 = 0.012500000000000001
> (gdb) p avg_custom_cost
> $19 = 0.01
> (gdb) p avg_custom_cost * 1.1
> $20 = 0.011000000000000001
>
> That is, the estimated cost of the custom plan is just the evaluation
> time for a simple constant, while the estimated cost of the generic
> plan includes a charge for evaluation of int4_numeric(). That makes
> the latter more than ten percent more expensive, and since this logic
> isn't considering anything else at all (particularly not the cost of
> planning), it thinks that makes the custom plan worth picking.
>
> We've been around on this before, but not yet thought of a reasonable
> way to estimate planning cost, let alone compare it to estimated
> execution costs. Up to now I hadn't thought that was a particularly
> urgent issue, but this example shows it's worth worrying about.
>
> One thing that was suggested in the previous discussion is to base the
> planning cost estimate on the length of the rangetable. We could
> do something trivial like add "10 * (length(plan->rangetable) + 1)"
> in this comparison.
>
> Another thing that would fix this particular issue, though perhaps not
> related ones, is to start charging something nonzero for ModifyTable
> nodes, say along the lines of one seq_page_cost per inserted/modified
> row. That would knock the total estimated cost for an INSERT up enough
> that the ten percent threshold wouldn't be exceeded.
Shouldn't it consider the new value of the bound parameter when deciding
whether a new custom plan is needed, since that can be one of the main
reasons a different plan is required?
The current behavior is to either choose the generic plan or build a new
custom plan with the new parameters, based on choose_custom_plan().
Shouldn't the behavior instead be as below:
a. choose the generic plan
b. choose one of the existing custom plans
c. create a new custom plan
The choice can be made based on the new value of the bound parameter.
With Regards,
Amit Kapila.