From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: Greg Smith <greg(at)2ndquadrant(dot)com>, Hari Babu <haribabu(dot)kommi(at)huawei(dot)com>, Mike Blackwell <mike(dot)blackwell(at)rrd(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2013-11-27 14:05:11
Message-ID: CA+TgmobQyZHozTO-9=5zFiQnkbO57O-ONLtYcMGF-7hT3RMHeA@mail.gmail.com
Lists: pgsql-hackers
On Wed, Nov 27, 2013 at 12:56 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
>> - There is a comment "TODO: It would be nice to behave like the
>> history and the source strings were concatenated, so that you could
>> compress using the new data, too." If we're not already doing that,
>> then how are we managing to compress WAL by more than one-quarter in
>> the "hundred tiny fields, all changed" case?
>
> The algorithm does not concatenate the history and source strings;
> the hash table is built only from the history data, and later, when
> no match is found, the new data is added to the history as well.
> That is probably the reason for the above result.
From the compressor's point of view, that's pretty much equivalent to
behaving as if those strings were concatenated.
The point is that there's a difference between using the old tuple's
history entries to compress the new tuple and using the new tuple's
own history to compress it. The former is delta-compression, which is
what we're supposedly doing here. The latter is just plain
compression. That doesn't *necessarily* make it a bad idea, but they
are clearly two different things.
>> The basic idea is that you use a rolling hash function to divide up
>> the history data into chunks of a given average size. So we scan the
>> history data, compute a rolling hash value at each point, and each
>> time the bottom N bits are zero, we consider that to be the end of a
>> chunk. We enter all the chunks into a hash table. The chunk size
>> will vary, but on the average, given a reasonably well-behaved rolling
>> hash function (the pglz one probably doesn't qualify) it'll happen
>> every 2^N bytes, so perhaps for this purpose we'd choose N to be
>> between 3 and 5. Then, we scan the input we want to compress and
>> divide it into chunks in the same way. Chunks that don't exist in the
>> history data get copied to the output, while those that do get
>> replaced with a reference to their position in the history data.
>
> I think this idea looks better than the current one, and it will
> definitely improve some of the cases, but I'm not sure we can win in
> all cases. We tried a similar idea (reducing the size of the hash
> table and, in turn, the number of comparisons) by adding every 10th
> byte to the history lookup table rather than every byte. It improved
> most cases but not all (the "hundred tiny fields, all changed" and
> "hundred tiny fields, half changed" tests were still slow).
> Patch and results are at the link below (see approach-1):
> http://www.postgresql.org/message-id/001f01ce1c14$d3af0770$7b0d1650$@kapila@huawei.com
What you did there will, I think, tend to miss a lot of compression
opportunities. Suppose for example that the old tuple is
ABCDEFGHIJKLMNOP and the new tuple is xABCDEFGHIJKLMNOP. After
copying one literal byte we'll proceed to copy 9 more, missing the
fact that there was a long match available after the first byte. The
advantage of the fingerprinting technique is that it's supposed to be
resistant to that sort of thing.
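To make that concrete, here is a toy sketch of the chunk-and-fingerprint
scheme quoted above. This is not the pglz code and not from any patch in
this thread; the rolling hash, the constants, and all names are purely
illustrative. Chunk boundaries fall wherever the low N bits of a rolling
hash are zero, so inserting a byte at the front of the data only perturbs
the first chunk and the later chunks still line up with the history:

/*
 * Toy demonstration of content-defined chunking for delta encoding.
 * NOT the pglz code and not from any patch in this thread; the hash,
 * the constants, and all names are illustrative only.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define CHUNK_MASK  0x07        /* N = 3 => average chunk of ~8 bytes */
#define HASH_WINDOW 4           /* bytes contributing to the rolling hash */
#define MAX_CHUNKS  64

typedef struct
{
    const char *start;
    int         len;
} Chunk;

/*
 * Split buf into variable-size chunks: a chunk ends wherever the low
 * bits of a rolling hash over the last HASH_WINDOW bytes are all zero.
 */
static int
split_chunks(const char *buf, int len, Chunk *out)
{
    int         nchunks = 0;
    int         chunk_start = 0;
    uint32_t    hash = 0;

    for (int i = 0; i < len; i++)
    {
        hash = (hash << 1) + (unsigned char) buf[i];
        if (i >= HASH_WINDOW)
            hash -= ((uint32_t) (unsigned char) buf[i - HASH_WINDOW]) << HASH_WINDOW;

        if ((hash & CHUNK_MASK) == 0 || i == len - 1)
        {
            out[nchunks].start = buf + chunk_start;
            out[nchunks].len = i - chunk_start + 1;
            nchunks++;
            chunk_start = i + 1;
        }
    }
    return nchunks;
}

int
main(void)
{
    const char *history = "ABCDEFGHIJKLMNOP";  /* old tuple */
    const char *input = "xABCDEFGHIJKLMNOP";   /* new tuple, one byte inserted */
    Chunk       hchunks[MAX_CHUNKS];
    Chunk       ichunks[MAX_CHUNKS];
    int         nh = split_chunks(history, (int) strlen(history), hchunks);
    int         ni = split_chunks(input, (int) strlen(input), ichunks);

    /*
     * Emit each chunk of the input either as a reference into the history
     * or as a literal.  (A real encoder would look chunks up in a hash
     * table rather than scanning the history chunks linearly.)
     */
    for (int i = 0; i < ni; i++)
    {
        int         match = -1;

        for (int h = 0; h < nh; h++)
        {
            if (hchunks[h].len == ichunks[i].len &&
                memcmp(hchunks[h].start, ichunks[i].start, ichunks[i].len) == 0)
            {
                match = h;
                break;
            }
        }

        if (match >= 0)
            printf("COPY    off=%ld len=%d\n",
                   (long) (hchunks[match].start - history), hchunks[match].len);
        else
            printf("LITERAL \"%.*s\"\n", ichunks[i].len, ichunks[i].start);
    }
    return 0;
}

With these toy parameters the inserted "x" comes out as a one-byte
literal and everything after it is emitted as copy references into the
history, which is exactly the kind of match the every-10th-byte
approach misses.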
> Now the tough question is what the possible options for this patch
> are, and which one to pick:
> a. optimize the encoding technique so that it improves results in
>    most cases, even if not all.
> b. have a table-level option or GUC to enable/disable WAL compression.
> c. use some heuristics to check whether the chances of compression
>    are good, and only then perform compression:
>    1. apply this optimization only for tuples of size > 128 and < 2000 bytes
>    2. apply this optimization only if the number of modified columns
>       is less than 25% (some threshold number) of the total columns.
>       I think we can get the modified columns from the target entry
>       and use that if triggers haven't changed the tuple. I remember
>       there were concerns earlier that this value can't be trusted
>       completely, but I think using it as a heuristic is not a
>       problem, even if the number is wrong in some cases.
> d. forget about this optimization and reject the patch.
> I think doing options 'b' and 'c' together can make this optimization
> usable in the cases where it is actually useful.
I agree that we probably want to do (b), and I suspect we want both a
GUC and a reloption, assuming that can be done relatively cleanly.
However, I think we should explore (a) more before we explore (c). I
think there's a good chance that we can reduce the CPU overhead of
this enough to feel comfortable having it enabled by default. If we
proceed with heuristics as in approach (c), I don't think that's the
end of the world, but I think there will be more corner cases where we
lose and have to fiddle things manually.
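
For what it's worth, the heuristic gate Amit sketches in (c) amounts to
a couple of cheap comparisons before we attempt the delta encoding at
all. A minimal sketch, with purely illustrative names and thresholds
(nothing here is from an actual patch):

#include <stdbool.h>
#include <stddef.h>

/* Illustrative thresholds, echoing the numbers in (c) above. */
#define DELTA_MIN_TUPLE_SIZE    128
#define DELTA_MAX_TUPLE_SIZE    2000

/*
 * Hypothetical gate: attempt delta encoding of the update's WAL record
 * only when the new tuple is mid-sized and only a small fraction of its
 * columns were modified.
 */
static bool
delta_encoding_worthwhile(size_t new_tuple_len, int total_cols, int modified_cols)
{
    if (new_tuple_len <= DELTA_MIN_TUPLE_SIZE ||
        new_tuple_len >= DELTA_MAX_TUPLE_SIZE)
        return false;

    /* Skip when more than ~25% of the columns changed. */
    if (modified_cols * 4 > total_cols)
        return false;

    return true;
}

Whether such a gate is needed at all depends, as above, on how far the
CPU overhead in (a) can be driven down.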
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company