Re: bad JIT decision

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Scott Ribe <scott_ribe(at)elevated-dev(dot)com>, David Rowley <dgrowleyml(at)gmail(dot)com>, PostgreSQL General <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: bad JIT decision
Date: 2020-07-28 19:47:36
Message-ID: 1236773.1595965656@sss.pgh.pa.us
Lists: pgsql-general

Andres Freund <andres(at)anarazel(dot)de> writes:
> On 2020-07-27 19:02:56 -0400, Alvaro Herrera wrote:
>>> I don't quite understand why is it that a table with 1000 partitions
>>> means that JIT compiles the thing 1000 times. Sure, it is possible that
>>> some partitions have a different column layout, but it seems an easy bet
>>> that most cases are going to have identical column layout, and so tuple
>>> deforming can be shared.

> No, that's not what happens. The issue rather is that at execution time
> there's simply nothing tying the partitioned parts of the query together
> from the executor POV. Each table scan gets its own expressions to
> evaluate quals etc. That's not a JIT specific thing, it's general.

I think what Alvaro is imagining is caching the results of compiling
tuple-deforming. You could hash on the basis of all the parts of the
tupdesc that the deforming compiler cares about, and then share the
compiled code across different relations with similar tupdescs.
That could win for lots-o-partitions cases, and it could win across
successive queries on the same relation, too.
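To make the idea concrete, here is a rough, self-contained sketch of such a cache (hypothetical names throughout; this is not the actual llvmjit code, and a real version would live in the JIT provider and compare full keys rather than only hashes). The cache key covers just the attribute properties that deforming depends on, so two partitions with identical layouts hash to the same compiled deformer:

    /*
     * Hypothetical sketch: cache compiled tuple-deforming functions,
     * keyed by the tupdesc properties the deforming compiler cares about.
     * Illustrative only; not PostgreSQL internals.
     */
    #include <stddef.h>
    #include <stdint.h>

    typedef struct DeformKeyAttr
    {
        int16_t attlen;      /* fixed length, or -1/-2 for varlena/cstring */
        char    attalign;    /* alignment requirement */
        uint8_t attbyval;    /* pass-by-value? */
        uint8_t attnotnull;  /* NOT NULL, so no null-bitmap check needed */
    } DeformKeyAttr;

    /* A compiled deformer is just a function pointer produced by the JIT. */
    typedef void (*DeformFunc) (void *slot);

    typedef struct CacheEntry
    {
        uint64_t    hash;
        int         natts;
        DeformFunc  func;
    } CacheEntry;

    #define DEFORM_CACHE_SIZE 64
    static CacheEntry deform_cache[DEFORM_CACHE_SIZE];

    /* FNV-1a over one field at a time, so struct padding never matters. */
    static uint64_t
    fnv1a(uint64_t h, const void *data, size_t len)
    {
        const unsigned char *p = data;

        for (size_t i = 0; i < len; i++)
        {
            h ^= p[i];
            h *= 1099511628211ULL;
        }
        return h;
    }

    static uint64_t
    deform_key_hash(const DeformKeyAttr *attrs, int natts)
    {
        uint64_t    h = 1469598103934665603ULL; /* FNV offset basis */

        for (int i = 0; i < natts; i++)
        {
            h = fnv1a(h, &attrs[i].attlen, sizeof(attrs[i].attlen));
            h = fnv1a(h, &attrs[i].attalign, sizeof(attrs[i].attalign));
            h = fnv1a(h, &attrs[i].attbyval, sizeof(attrs[i].attbyval));
            h = fnv1a(h, &attrs[i].attnotnull, sizeof(attrs[i].attnotnull));
        }
        return h;
    }

    /*
     * Return a previously compiled deformer for an equivalent layout, or
     * compile one and remember it.  "compile" stands in for the expensive
     * LLVM work that today runs once per partition scan.
     */
    static DeformFunc
    get_cached_deformer(const DeformKeyAttr *attrs, int natts,
                        DeformFunc (*compile) (const DeformKeyAttr *, int))
    {
        uint64_t    hash = deform_key_hash(attrs, natts);
        int         slot = (int) (hash % DEFORM_CACHE_SIZE);

        if (deform_cache[slot].func != NULL &&
            deform_cache[slot].hash == hash &&
            deform_cache[slot].natts == natts)
            return deform_cache[slot].func;   /* share the compiled code */

        deform_cache[slot].hash = hash;
        deform_cache[slot].natts = natts;
        deform_cache[slot].func = compile(attrs, natts);
        return deform_cache[slot].func;
    }

With something along these lines, a scan over 1000 identically-laid-out partitions would pay the compilation cost once and reuse the function 999 times.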

Maybe the same principle could be applied to compiled expressions,
but it's less obvious that you'd get enough matches to win.

regards, tom lane
