From: | Amit Kapila <amit(dot)kapila(at)huawei(dot)com> |
---|---|
To: | "'Heikki Linnakangas'" <hlinnakangas(at)vmware(dot)com>, <simon(at)2ndquadrant(dot)com> |
Cc: | "'Alvaro Herrera'" <alvherre(at)2ndquadrant(dot)com>, <noah(at)leadboat(dot)com>, <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Performance Improvement by reducing WAL for Update Operation |
Date: | 2013-02-01 13:06:42 |
Message-ID: | 002101ce007c$fb87d190$f29774b0$@kapila@huawei.com |
Lists: | pgsql-hackers |
On Thursday, January 31, 2013 6:44 PM Amit Kapila wrote:
> On Wednesday, January 30, 2013 8:32 PM Amit Kapila wrote:
> > On Tuesday, January 29, 2013 7:42 PM Amit Kapila wrote:
> > > On Tuesday, January 29, 2013 3:53 PM Heikki Linnakangas wrote:
> > > > On 29.01.2013 11:58, Amit Kapila wrote:
> > > > > Can there be another way in which the current patch code can be
> > > > > made better, so that we don't need to change the encoding
> > > > > approach? I have a feeling that the new approach might not be
> > > > > equally good performance-wise.
> > > >
> > > > The point is that I don't want heap_delta_encode() to know the
> > > > internals of pglz compression. You could probably make my patch
> > > > more like yours in behavior by also passing an array of offsets in
> > > > the new tuple to check, and only checking for matches at those
> > > > offsets.
> > >
> > > I think it makes sense, because if we have the offsets of both the
> > > new and old tuples, we can internally use memcmp to compare columns
> > > and use the same algorithm for encoding.
> > > I will change the patch according to this suggestion.
> >
> > I have modified the patch as per the above suggestion.
> > Apart from passing the new and old tuple offsets, I have also passed
> > the bitmap length, as we need to copy the bitmap of the new tuple
> > as-is into the Encoded WAL Tuple.
> >
> > Please see whether such an API design is okay.
> >
> > I shall update the README and send the performance/WAL-reduction data
> > for the modified patch tomorrow.
>
> The updated patch, including comments and the README, is attached with
> this mail.
> This patch contains exactly the same design and behavior as the previous
> one, and it incorporates Heikki's API design suggestion.
>
> The performance data is similar; as it is not complete, I shall send it
> tomorrow.
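To make the division of labour concrete, here is a minimal, self-contained
sketch of the API shape discussed above: the caller supplies per-column
offsets for both tuple versions plus the new tuple's bitmap length, and the
encoder only needs memcmp, knowing nothing about pglz internals. All names
here (delta_encode_tuple, the 'M'/'L' tag bytes, the offset-array layout)
are illustrative assumptions for this sketch, not identifiers from the
actual patch; the real encoder would emit pglz-style match/literal
instructions rather than per-column tags.

#include <stdbool.h>
#include <string.h>

/*
 * Sketch only, not patch code.  Column i of a tuple's data area occupies
 * bytes [offs[i], offs[i+1]), so each offset array carries natts + 1
 * entries.  Returns false if the encoded form would not fit in max_len,
 * in which case the caller falls back to logging the full new tuple.
 */
static bool
delta_encode_tuple(const char *old_data, const int *old_offs,
                   const char *new_data, const int *new_offs,
                   int natts,
                   const char *new_bitmap, int bitmap_len,
                   char *encoded, int max_len, int *encoded_len)
{
    int         i;
    int         out = 0;

    /* The new tuple's null bitmap is copied as-is into the encoded tuple. */
    if (bitmap_len > max_len)
        return false;
    memcpy(encoded, new_bitmap, bitmap_len);
    out = bitmap_len;

    for (i = 0; i < natts; i++)
    {
        int         old_len = old_offs[i + 1] - old_offs[i];
        int         new_len = new_offs[i + 1] - new_offs[i];

        if (old_len == new_len &&
            memcmp(old_data + old_offs[i],
                   new_data + new_offs[i], new_len) == 0)
        {
            /* Unchanged column: emit a one-byte reference to the old tuple. */
            if (out + 1 > max_len)
                return false;
            encoded[out++] = 'M';
        }
        else
        {
            /* Changed column: emit a tag followed by the new column bytes. */
            if (out + 1 + new_len > max_len)
                return false;
            encoded[out++] = 'L';
            memcpy(encoded + out, new_data + new_offs[i], new_len);
            out += new_len;
        }
    }

    *encoded_len = out;
    return true;
}

On decode, an 'M' tag would copy the corresponding column from the old
tuple and an 'L' tag would copy the literal bytes that follow; that is
what lets unchanged columns drop out of the WAL record entirely.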
Performance data for the patch is attached with this mail.
Conclusions from the readings (these are the same as for my previous patch):
1. With the original pgbench, there is a maximum 7% WAL reduction with not
much performance difference.
2. With a record size of 250 in pgbench, there is a maximum WAL reduction
of 35% with not much performance difference.
3. With a record size of 500 and above in pgbench, there is an improvement
in both performance and WAL reduction.
As the record size increases, the performance gain grows and the WAL size
shrinks further.
Performance data for synchronous_commit = on is in progress; I shall post
it once it is done.
I am expecting it to be the same as before.
With Regards,
Amit Kapila.
Attachment | Content-Type | Size |
---|---|---|
pgbench_wal_lz_mod.htm | text/html | 71.2 KB |