From: Pierre Ducroquet <p(dot)psql(at)pinaraf(dot)info>
To: pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: [PATCH] LLVM tuple deforming improvements
Date: 2018-07-13 08:20:42
Message-ID: 2033438.sjni0gc1MV@peanuts2
Lists: pgsql-hackers
Hi
As reported in the «effect of JIT tuple deform?» thread, JIT tuple deforming
causes slowdowns in some cases.
I've experimented with the generated code and with the LLVM optimizer to try to
fix that issue; here are the results of my experiments, with the corresponding
patches.
All performance measurements are done following the test from
https://www.postgresql.org/message-id/CAFj8pRAOcSXNnykfH=M6mNaHo+g=FaUs=DLDZsOHdJbKujRFSg(at)mail(dot)gmail(dot)com
Base measurements :
No JIT : 850ms
JIT without tuple deforming : 820 ms (0.2ms optimizing)
JIT with tuple deforming, no opt : 1650 ms (1.5ms)
JIT with tuple deforming, -O3 : 770 ms (105ms)
1) Force -O1 when deforming
This is by far the best result I managed to get. With -O1, the queries are even
faster than with -O3, because the optimizer itself runs much faster while still
generating efficient code.
I have tried adding the relevant passes to the pass manager by hand, but it
looks like the interesting ones are not available unless you enable at least -O1.
JIT with tuple deforming, -O1 : 725 ms (54ms)
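To make the idea concrete, here is a rough sketch of how the optimization level
could be selected in llvm_optimize_module() in llvmjit.c. This only illustrates
the approach, it is not the patch itself: the PGJIT_OPT1 flag and the way it
would get set for modules containing deforming functions are assumptions of
mine, while the LLVM-C pass manager calls are the existing API.

    /* Sketch only: pick the optimization level from the context flags. */
    LLVMPassManagerBuilderRef llvm_pmb;
    LLVMPassManagerRef llvm_mpm;
    int         compile_optlevel;

    if (context->base.flags & PGJIT_OPT3)
        compile_optlevel = 3;
    else if (context->base.flags & PGJIT_OPT1)  /* hypothetical new flag */
        compile_optlevel = 1;
    else
        compile_optlevel = 0;

    llvm_pmb = LLVMPassManagerBuilderCreate();
    LLVMPassManagerBuilderSetOptLevel(llvm_pmb, compile_optlevel);

    llvm_mpm = LLVMCreatePassManager();
    LLVMPassManagerBuilderPopulateModulePassManager(llvm_pmb, llvm_mpm);
    LLVMRunPassManager(llvm_mpm, module);
    LLVMDisposePassManager(llvm_mpm);
    LLVMPassManagerBuilderDispose(llvm_pmb);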
2) Improve the LLVM IR code
The code generator in llvmjit_deform.c currently relies on the LLVM optimizer
to do the right thing. For instance, it can generate a lot of empty blocks that
contain only a jump. If we don't want to enable the LLVM optimizer for all
generated code, we have to get rid of this kind of pattern, which is what the
attached patch does. When the optimizer is not used, this gains a few cycles,
nothing impressive.
I have tried to get closer to the optimized bitcode, but that requires building
phi nodes manually instead of using alloca, and it still isn't enough to bring
us to the performance level of -O1.
JIT with tuple deforming, no opt : 1560 ms (1.5ms)
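As an illustration of the kind of pattern involved, here is a simplified sketch
of skipping an alignment block instead of always emitting one and leaving the
optimizer to delete it when it only contains a jump. This is not the actual
llvmjit_deform.c code; the names (fn, builder, b_next, attalign, known_aligned)
are placeholders for illustration.

    /*
     * Sketch: only create the alignment block when there is real work to do,
     * instead of always creating it and relying on the optimizer to remove
     * the empty "jump-only" version.
     */
    if (attalign > 1 && !known_aligned)
    {
        LLVMBasicBlockRef b_align = LLVMAppendBasicBlock(fn, "attalign");

        LLVMBuildBr(builder, b_align);
        LLVMPositionBuilderAtEnd(builder, b_align);
        /* ... emit the TYPEALIGN() computation on the current offset ... */
        LLVMBuildBr(builder, b_next);
    }
    else
    {
        /* no alignment needed: branch straight to the next block */
        LLVMBuildBr(builder, b_next);
    }
    LLVMPositionBuilderAtEnd(builder, b_next);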
3) *Experimental*: faster non-NULL handling
Currently, the generated code always looks at the tuple's null bitmap to check
each field's null-ness, ANDing the result with the hasnulls bit of the header.
Branching on hasnulls first, and skipping the per-field bitmap checks entirely
when it is not set, improves performance when most tuples contain no nulls, but
taxes performance when nulls are found.
I have not yet succeeded in implementing it, but I think we could use the
statistics collected for a given table to enable this only when we know we are
likely to benefit from it.
JIT with tuple deforming, no opt : 1520 ms (1.5ms)
JIT with tuple deforming, -O1 : 690 ms (54ms)
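In plain C terms (not the generated IR), the idea corresponds to the sketch
below. It mirrors what the interpreted path does with HEAP_HASNULL and
att_isnull(); the function itself and the elided datum-fetching parts are just
placeholders for illustration.

    /* Sketch: hoist the hasnulls test out of the per-attribute loop. */
    static void
    deform_sketch(HeapTupleHeader tup, int natts, Datum *values, bool *isnull)
    {
        bits8  *bp = tup->t_bits;   /* null bitmap, only meaningful if hasnulls */
        bool    hasnulls = (tup->t_infomask & HEAP_HASNULL) != 0;
        int     attnum;

        if (!hasnulls)
        {
            /* fast path: no bitmap lookups at all */
            for (attnum = 0; attnum < natts; attnum++)
            {
                isnull[attnum] = false;
                /* ... fetch values[attnum] ... */
            }
        }
        else
        {
            /* slow path: per-attribute check, as the generated code does today */
            for (attnum = 0; attnum < natts; attnum++)
            {
                if (att_isnull(attnum, bp))
                {
                    isnull[attnum] = true;
                    continue;
                }
                /* ... fetch values[attnum] ... */
            }
        }
    }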
Attachments:
0001-Check-for-the-hasnulls-attribute-before-checking-ind.patch (text/x-patch, 3.6 KB)
0001-Introduce-opt1-in-LLVM-JIT-and-force-it-with-deformi.patch (text/x-patch, 3.9 KB)
0001-Skip-alignment-code-blocks-when-they-are-not-needed.patch (text/x-patch, 4.2 KB)