From: Paul Guo <pguo(at)pivotal(dot)io>
To: Heikki Linnakangas <hlinnaka(at)iki(dot)fi>
Cc: PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>, Taylor Vesely <tvesely(at)pivotal(dot)io>
Subject: Re: Batch insert in CTAS/MatView code
Date: 2019-09-09 10:31:54
Message-ID: CAEET0ZG31mD5SWjTYsAt0JTLReOejPvusJorZ3kGZ1=N1AC-Fw@mail.gmail.com
Lists: pgsql-hackers
On Fri, Aug 2, 2019 at 2:55 AM Heikki Linnakangas <hlinnaka(at)iki(dot)fi> wrote:
> On 17/06/2019 15:53, Paul Guo wrote:
> > I noticed that to do batch insert, we might sometimes need an additional
> > memory copy compared with "single insert" (that should be the reason we
> > previously saw a small regression), so a good solution seems to be to fall
> > back to "single insert" if the tuple length is larger than a threshold. I
> > set this to 2000 after quick testing.
>
> Where does the additional memory copy come from? Can we avoid doing it
> in the multi-insert case?
Hi Heikki,
Sorry for the late reply. I spent some time reading and debugging the
TupleTableSlotOps code for the various TupleTableSlot types, especially the
BufferHeapTupleTableSlot case, where we seemed to see a regression when no
threshold was set, and did more debugging and testing of the CTAS case.
I found that my earlier claim of an "additional memory copy" (an extra
tuple-content copy compared with single insert) is wrong for the latest code
(and probably was wrong for the previous code as well). So in theory we should
not need to worry about extra tuple-copy overhead, and I therefore tried the
attached patch, which does not set a multi-insert threshold at all.
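To make the mechanism concrete, below is a minimal sketch of how the CTAS
DestReceiver can buffer incoming tuples and flush them via
table_multi_insert(). This is not taken from the attached patch; the struct
layout, the MAX_BUFFERED_SLOTS constant and the function names are
illustrative assumptions only.

/*
 * Illustrative sketch only -- not the attached patch.  Buffer tuples in the
 * CTAS DestReceiver and flush them with table_multi_insert().
 */
#include "postgres.h"
#include "access/heapam.h"      /* BulkInsertState */
#include "access/tableam.h"     /* table_multi_insert(), table_slot_create() */
#include "executor/tuptable.h"
#include "tcop/dest.h"
#include "utils/rel.h"

#define MAX_BUFFERED_SLOTS 1000     /* illustrative flush threshold */

typedef struct MyIntoRelState
{
    DestReceiver    pub;            /* must be first, as in DR_intorel */
    Relation        rel;            /* the table being created */
    CommandId       output_cid;
    int             ti_options;     /* TABLE_INSERT_* flags */
    BulkInsertState bistate;
    TupleTableSlot *slots[MAX_BUFFERED_SLOTS];
    int             nslots;
} MyIntoRelState;

static void
multi_insert_flush(MyIntoRelState *state)
{
    if (state->nslots == 0)
        return;

    table_multi_insert(state->rel, state->slots, state->nslots,
                       state->output_cid, state->ti_options, state->bistate);

    for (int i = 0; i < state->nslots; i++)
        ExecClearTuple(state->slots[i]);
    state->nslots = 0;
}

static bool
my_intorel_receive(TupleTableSlot *slot, DestReceiver *self)
{
    MyIntoRelState *state = (MyIntoRelState *) self;
    TupleTableSlot **bufslot = &state->slots[state->nslots];

    /*
     * Copy the incoming tuple into a slot owned by the target table's AM so
     * that it stays valid until the buffer is flushed.
     */
    if (*bufslot == NULL)
        *bufslot = table_slot_create(state->rel, NULL);
    ExecCopySlot(*bufslot, slot);
    state->nslots++;

    if (state->nslots >= MAX_BUFFERED_SLOTS)
        multi_insert_flush(state);

    return true;
}

/*
 * A matching shutdown callback (not shown) would call multi_insert_flush()
 * one last time and drop the buffered slots.
 */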
To make the test results more stable, this time I ran a simple
'select count(*) from tbl' before each CTAS to warm up the shared buffers, ran
a checkpoint before each CTAS, disabled autovacuum ('autovacuum = off'), and
set shared_buffers large enough (but below 25% of total memory, as recommended
by the PG docs) that every CTAS reads entirely from shared buffers once warmed
up (double-checked via explain(analyze, buffers)). These seem to be reasonable
conditions for performance testing. Each CTAS case was run three times, with
the warm-up and checkpoint repeated before each run as mentioned.
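For reference, one timed run of the procedure just described could be driven
by a small harness roughly like the following libpq sketch; the connection
string and the tt/t12 table names are assumptions based on this thread, and
the explain(analyze, buffers) verification is done separately.

/*
 * Rough libpq sketch of one timed CTAS run: drop any old target table,
 * checkpoint, warm-up scan, then the timed CREATE TABLE AS.
 */
#include <stdio.h>
#include <sys/time.h>
#include <libpq-fe.h>

static void
run(PGconn *conn, const char *sql)
{
    PGresult   *res = PQexec(conn, sql);

    if (PQresultStatus(res) != PGRES_COMMAND_OK &&
        PQresultStatus(res) != PGRES_TUPLES_OK)
        fprintf(stderr, "\"%s\" failed: %s", sql, PQerrorMessage(conn));
    PQclear(res);
}

int
main(void)
{
    PGconn     *conn = PQconnectdb("dbname=postgres");  /* assumed DSN */
    struct timeval start, stop;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    for (int i = 1; i <= 3; i++)
    {
        run(conn, "DROP TABLE IF EXISTS tt");
        run(conn, "CHECKPOINT");                    /* flush dirty buffers */
        run(conn, "SELECT count(*) FROM t12");      /* warm shared buffers */

        gettimeofday(&start, NULL);
        run(conn, "CREATE TABLE tt AS SELECT * FROM t12");
        gettimeofday(&stop, NULL);

        printf("run %d: %.3f ms\n", i,
               (stop.tv_sec - start.tv_sec) * 1000.0 +
               (stop.tv_usec - start.tv_usec) / 1000.0);
    }

    PQfinish(conn);
    return 0;
}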
I mainly tested the t12 case (a normal table with tuple size ~2k), since for
the other tables our patch performs either better than or about the same as
the baseline.
Patch (ms):           1st_run   2nd_run   3rd_run
t12_BufferHeapTuple  7883.400  7549.966  8090.080
t12_Virtual          8041.637  8191.317  8182.404

Baseline (ms):        1st_run   2nd_run   3rd_run
t12_BufferHeapTuple  8264.290  7508.410  7681.702
t12_Virtual          8167.792  7970.537  8106.874
I also roughly tested the other tables we mentioned (t11 and t14); the results
and conclusions are the same.
t12_BufferHeapTuple means: create table tt as select * from t12;
t12_Virtual means: create table tt as select *partial columns* from t12;
So for t12 the results with our code and with the baseline are similar, which
suggests that not setting a threshold is fine, though the t12_BufferHeapTuple
results vary quite a bit (by up to roughly 0.5 seconds) for both the patch and
the baseline, whereas the Virtual case is quite stable. This confused me a bit,
given that the source table is cached in shared buffers. I suspected
checkpoints were interfering, so I kept checkpoints from occurring during the
CTAS runs by setting max_wal_size = 3000; the BufferHeapTuple case (see below)
still varies somewhat. I'm not sure of the reason, but this does not seem to be
a blocker for the patch.
Patch (ms):           1st_run   2nd_run   3rd_run
t12_BufferHeapTuple  7717.304  7413.259  7452.773
t12_Virtual          7445.742  7483.148  7593.583

Baseline (ms):        1st_run   2nd_run   3rd_run
t12_BufferHeapTuple  8186.302  7736.541  7759.056
t12_Virtual          8004.880  8096.712  7961.483
Attachment: v4-0001-Multi-insert-in-Create-Table-As.patch (application/octet-stream, 9.8 KB)