From: Pavan Deolasee <pavan(dot)deolasee(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Bruce Momjian <bruce(at)momjian(dot)us>, Jaime Casanova <jaime(dot)casanova(at)2ndquadrant(dot)com>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Patch: Write Amplification Reduction Method (WARM)
Date: 2017-04-05 18:32:41
Message-ID: CABOikdOTstHK2y0rDk+Y3Wx9HRe+bZtj3zuYGU=VngneiHo5KQ@mail.gmail.com
Lists: pgsql-hackers
On Wed, Apr 5, 2017 at 7:06 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> On Tue, Apr 4, 2017 at 11:43 PM, Pavan Deolasee
> <pavan(dot)deolasee(at)gmail(dot)com> wrote:
> > Well, better than causing a deadlock ;-)
>
> Yep.
>
> > Let's see if we want to go down the path of blocking WARM when tuples
> > have toasted attributes. I submitted a patch yesterday, but having
> > slept over it, I think I made mistakes there. It might not be enough
> > to look at the caller-supplied new tuple because that may not have any
> > toasted values, but the final tuple that gets written to the heap may
> > be toasted.
>
> Yes, you have to make whatever decision you're going to make here
> after any toast-ing has been done.
>
I am worried that might add more work in that code path, since we would then
have to fetch attributes for the new tuple as well. Maybe a good compromise
would be to still check only the user-supplied new tuple, but be prepared to
handle toasted values during recheck. The attached version does that.
>
> Well, I think that there's some danger of whittling down this
> optimization to the point where it still incurs most of the costs --
> in bit-space if not in CPU cycles -- but no longer yields much of the
> benefit. Even though the speed-up might still be substantial in the
> cases where the optimization kicks in, if a substantial number of
> users doing things that are basically pretty normal sometimes fail to
> get the optimization, this isn't going to be very exciting outside of
> synthetic benchmarks.
>
I agree. Blocking WARM off for too many cases won't serve the purpose.
>
> Backing up a little bit, it seems like the root of the issue here is
> that, at a certain point in what was once a HOT chain, you make a WARM
> update, and you make a decision about which indexes to update at that
> point. Now, later on, when you traverse that chain, you need to be
> able to figure out what decision you made before; otherwise, you might make
> a bad decision about whether an index pointer applies to a particular
> tuple. If the index tuple is WARM, then the answer is "yes" if the
> heap tuple is also WARM, and "no" if the heap tuple is CLEAR (which is
> an odd antonym to WARM, but leave that aside). If the index tuple is
> CLEAR, then the answer is "yes" if the heap tuple is also CLEAR, and
> "maybe" if the heap tuple is WARM.
>
That's a fairly accurate description of the problem.
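For reference, the decision matrix described above can be sketched as a small
helper. The names and enums here are purely illustrative, not from the patch
(which encodes these states as flag bits on the actual tuples):

```c
#include <assert.h>

/* Hypothetical encoding of the two chain states discussed above; the
 * actual patch stores these as flag bits, not as a C enum. */
typedef enum { TUP_CLEAR, TUP_WARM } warm_state;

/* Can an index pointer in the given state reference a heap tuple in the
 * given state?  MATCH_MAYBE means a heap recheck is required. */
typedef enum { MATCH_NO, MATCH_YES, MATCH_MAYBE } match_result;

match_result
index_pointer_applies(warm_state index_tuple, warm_state heap_tuple)
{
    if (index_tuple == TUP_WARM)
        return (heap_tuple == TUP_WARM) ? MATCH_YES : MATCH_NO;

    /* Index tuple is CLEAR. */
    return (heap_tuple == TUP_CLEAR) ? MATCH_YES : MATCH_MAYBE;
}
```

The asymmetry in the last case is exactly what makes the recheck necessary:
a CLEAR index pointer against a WARM heap tuple cannot be resolved from
flags alone.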
>
> The first idea I had for an actual solution to this problem was to
> make the decision as to whether to insert new index entries based on
> whether the indexed attributes in the final tuple (post-TOAST) are
> byte-for-byte identical with the original tuple. If somebody injects
> a new compression algorithm into the system, or just changes the
> storage parameters on a column, or we re-insert an identical value
> into the TOAST table when we could have reused the old TOAST pointer,
> then you might have some potentially-WARM updates that end up being
> done as regular updates, but that's OK. When you are walking the
> chain, you will KNOW whether you inserted new index entries or not,
> because you can do the exact same comparison that was done before and
> be sure of getting the same answer. But that's actually not really a
> solution, because it doesn't work if all of the CLEAR tuples are gone
> -- all you have is the index tuple and the new heap tuple; there's no
> old heap tuple with which to compare.
>
Right. The old/new tuples may get HOT pruned and hence we cannot rely on
any algorithm which assumes that we can compare old and new tuples after
the update is committed/aborted.
>
> The only other idea that I have for a really clean solution here is to
> support this only for index types that are amcanreturn, and actually
> compare the value stored in the index tuple with the one stored in the
> heap tuple, ensuring that new index tuples are inserted whenever they
> don't match and then using the exact same test to determine the
> applicability of a given index pointer to a given heap tuple.
Just so that I understand, are you suggesting that while inserting WARM
index pointers, we check whether the new index tuple would look exactly the
same as the old index tuple and, if so, not insert a duplicate pointer at
all? I considered that, but it would require an index lookup during WARM
index insert, and for non-unique keys that may not be cheap. Or we would
need something like what Claudio wrote to sort all index entries by heap
TIDs. If we do that, then the recheck can be done based just on the index
and heap flags (because we can then turn the old index pointer into a CLEAR
pointer; index pointers are set to COMMON during the initial insert).
The other way is to pass the old tuple values along with the new tuple
values to amwarminsert, build index tuples from both, and then compare
them; for duplicate index tuples, skip the WARM insert.
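The duplicate-skip test in that last alternative could look roughly like this.
This is a sketch under assumed names; the real code would compare fully
formed index tuples built by the access method from post-TOAST values:

```c
#include <stdbool.h>
#include <string.h>

/* Sketch: given the indexed-key bytes built from the old and the new
 * heap tuple, a WARM index insert is only required when the keys differ;
 * byte-identical keys would just create a duplicate index pointer. */
bool
warm_insert_required(const char *old_key, size_t old_len,
                     const char *new_key, size_t new_len)
{
    if (old_len != new_len)
        return true;
    return memcmp(old_key, new_key, old_len) != 0;
}
```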
>
> By the way, the "Converting WARM chains back to HOT chains" section of
> README.WARM seems to be out of date. Any chance you could update that
> to reflect the current state and thinking of the patch?
>
>
Ok. I've extensively updated the README to match the current state of
affairs. Updated patch set attached. I've also added a mechanism to deal
with known-dead pointers during regular index scans; we can derive some
knowledge from the index/heap states and the recheck results. One
additional thing I did, which should help Dilip's test case, is to use the
index/heap state to decide whether a recheck is necessary at all. And when
we see a CLEAR pointer to all-WARM tuples, we set the pointer to WARM and
thus avoid repeated rechecks for the same tuple. My own tests show that the
regression goes away with this version, but I am not suggesting that we
can't come up with some other workload that still shows a regression.
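The pointer-promotion idea in that paragraph might be modelled as follows.
All names here are hypothetical; the real patch would flip a flag bit on the
index line pointer rather than use a C struct:

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal model of an index pointer's WARM flag. */
typedef struct
{
    bool warm;      /* has this pointer been marked WARM? */
} IndexPtrState;

/* Returns whether the current scan still needs a heap recheck for this
 * pointer.  When a CLEAR pointer turns out to reference an all-WARM
 * chain, promote it to WARM so subsequent scans skip the recheck. */
bool
recheck_needed(IndexPtrState *ptr, bool all_heap_tuples_warm)
{
    if (ptr->warm)
        return false;           /* already known to match */

    if (all_heap_tuples_warm)
        ptr->warm = true;       /* promote for future scans */

    return true;                /* pointer was CLEAR this time: recheck */
}
```

The point is that the recheck cost is paid at most once per pointer; after
promotion the flags alone answer the question.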
I also realised that altering the table-level enable_warm reloption would
require an AccessExclusiveLock, so I included that change too.
Thanks,
Pavan
--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Attachments:
- 0002-Free-3-bits-in-ip_posid-field-of-the-ItemPointer_v26.patch (application/octet-stream, 4.2 KB)
- 0003-Main-WARM-patch_v26.patch (application/octet-stream, 317.3 KB)
- 0001-Track-root-line-pointer-v23_v26.patch (application/octet-stream, 39.4 KB)
- 0004-Provide-control-knobs-to-decide-when-to-do-heap-_v26.patch (application/octet-stream, 69.9 KB)