From: Heikki Linnakangas <hlinnaka(at)iki(dot)fi>
To: Michael Paquier <michael(at)paquier(dot)xyz>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>, "kuroda(dot)hayato(at)fujitsu(dot)com" <kuroda(dot)hayato(at)fujitsu(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: ResourceOwner refactoring
Date: 2021-01-21 10:14:10
Message-ID: ad52c002-821c-e0e3-b406-9a0ab72a07bf@iki.fi
Lists: pgsql-hackers
On 21/01/2021 06:14, Michael Paquier wrote:
> On Thu, Jan 21, 2021 at 12:11:37AM +0200, Heikki Linnakangas wrote:
>> Summary: In the worst scenario, the patched version is still 24% slower
>> than unpatched. But many other scenarios are now faster with the patch.
>
> Is there a reason explaining the sudden drop for numsnaps within the
> [10,60] range? The gap looks deeper with a low numkeep.
It's the switch from the array to the hash table. With the patch, the
array holds 8 entries; without the patch, it's 64 entries. So you see a
drop around those points. I added more data points in that range to get
a better picture (all times in nanoseconds):
LIFO tests:
numkeep | numsnaps | unpatched | patched | ratio
---------+----------+-----------+---------+-------
0 | 1 | 32.8 | 32.9 | 1.00
0 | 2 | 31.6 | 32.8 | 1.04
0 | 4 | 32.7 | 32.0 | 0.98
0 | 6 | 34.1 | 33.9 | 0.99
0 | 8 | 35.1 | 32.4 | 0.92
0 | 10 | 34.0 | 37.1 | 1.09
0 | 15 | 33.1 | 35.9 | 1.08
0 | 20 | 33.0 | 38.8 | 1.18
0 | 25 | 32.9 | 42.3 | 1.29
0 | 30 | 32.9 | 40.5 | 1.23
0 | 35 | 33.1 | 39.9 | 1.21
0 | 40 | 33.0 | 39.0 | 1.18
0 | 45 | 35.3 | 41.1 | 1.16
0 | 50 | 33.0 | 40.8 | 1.24
0 | 55 | 32.8 | 40.6 | 1.24
0 | 58 | 33.0 | 41.5 | 1.26
0 | 60 | 33.1 | 41.6 | 1.26
0 | 62 | 32.8 | 41.7 | 1.27
0 | 64 | 46.8 | 40.9 | 0.87
0 | 66 | 47.0 | 42.5 | 0.90
0 | 68 | 47.1 | 41.8 | 0.89
0 | 70 | 47.8 | 41.7 | 0.87
(22 rows)
FIFO tests:
numkeep | numsnaps | unpatched | patched | ratio
---------+----------+-----------+---------+-------
0 | 1 | 32.3 | 32.1 | 0.99
0 | 2 | 33.4 | 31.6 | 0.95
0 | 4 | 34.0 | 31.4 | 0.92
0 | 6 | 35.4 | 33.2 | 0.94
0 | 8 | 34.8 | 31.9 | 0.92
0 | 10 | 35.4 | 40.2 | 1.14
0 | 15 | 35.4 | 40.3 | 1.14
0 | 20 | 35.6 | 43.8 | 1.23
0 | 25 | 35.4 | 42.4 | 1.20
0 | 30 | 36.0 | 43.3 | 1.20
0 | 35 | 36.4 | 45.1 | 1.24
0 | 40 | 36.9 | 46.6 | 1.26
0 | 45 | 37.7 | 45.3 | 1.20
0 | 50 | 37.2 | 43.9 | 1.18
0 | 55 | 38.4 | 46.8 | 1.22
0 | 58 | 37.6 | 45.0 | 1.20
0 | 60 | 37.7 | 46.6 | 1.24
0 | 62 | 38.4 | 46.5 | 1.21
0 | 64 | 48.7 | 47.6 | 0.98
0 | 66 | 48.2 | 45.8 | 0.95
0 | 68 | 48.5 | 47.5 | 0.98
0 | 70 | 48.4 | 47.3 | 0.98
(22 rows)
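To spell out what the numbers measure (this is only an illustrative
reconstruction, not the actual test code; snaps[], NUM_ITERS and
MAX_SNAPS are made-up names): numkeep snapshots stay registered for the
whole run, and the remaining numsnaps - numkeep snapshots are registered
and unregistered over and over, in LIFO or FIFO order:

    Snapshot    snaps[MAX_SNAPS];

    /* These stay registered for the whole run. */
    for (int i = 0; i < numkeep; i++)
        snaps[i] = RegisterSnapshot(GetTransactionSnapshot());

    for (int iter = 0; iter < NUM_ITERS; iter++)
    {
        /* Register the churned snapshots... */
        for (int i = numkeep; i < numsnaps; i++)
            snaps[i] = RegisterSnapshot(GetTransactionSnapshot());

        /* ...and unregister them again, in LIFO or FIFO order. */
        if (lifo)
        {
            for (int i = numsnaps - 1; i >= numkeep; i--)
                UnregisterSnapshot(snaps[i]);
        }
        else
        {
            for (int i = numkeep; i < numsnaps; i++)
                UnregisterSnapshot(snaps[i]);
        }
    }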
Let's recap the behavior:
Without patch
-------------
For each different kind of resource, there's an array that holds up to
64 entries. In ResourceOwnerForget(), the array is scanned linearly. If
the array fills up, we replace the array with a hash table. After
switching, all operations use the hash table.
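In pseudo-C, roughly (a simplified sketch, not the actual resowner.c
code; the resarray_* helpers are made-up names):

    #define RESARRAY_MAX_ARRAY 64       /* per-resource switch point */

    typedef struct ResourceArray
    {
        Datum  *items;      /* flat array, or hash buckets after the switch */
        uint32  nitems;
        bool    is_hash;    /* one-way flag: once set, stays set */
    } ResourceArray;

    static void resarray_convert_to_hash(ResourceArray *ra);
    static void resarray_hash_insert(ResourceArray *ra, Datum value);

    static void
    resarray_add(ResourceArray *ra, Datum value)
    {
        if (!ra->is_hash && ra->nitems >= RESARRAY_MAX_ARRAY)
            resarray_convert_to_hash(ra);

        if (ra->is_hash)
            resarray_hash_insert(ra, value);
        else
            ra->items[ra->nitems++] = value;
    }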
With patch
----------
There is one array that holds up to 8 entries. It is shared by all
resources. In ResourceOwnerForget(), the array is always scanned.
If the array fills up, all the entries are moved to a hash table,
freeing up space in the array, and the new entry is added to the array.
So the array is used together with the hash table, like an LRU cache of
the most recently remembered entries.
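And the patched behavior in the same style (again simplified; the real
code in the patch differs in the details, and ResourceKind here is just
a stand-in for the per-kind descriptor):

    #define RESOWNER_ARRAY_SIZE 8       /* shared by all resource kinds */

    typedef int ResourceKind;           /* stand-in for illustration */

    typedef struct ResourceElem
    {
        Datum        item;
        ResourceKind kind;
    } ResourceElem;

    typedef struct ResourceOwnerData
    {
        ResourceElem arr[RESOWNER_ARRAY_SIZE];  /* recently remembered */
        uint32       narr;                      /* slots of arr in use */
        HTAB        *hash;                      /* older, spilled entries */
    } ResourceOwnerData;

    static void resowner_hash_insert(ResourceOwnerData *o, ResourceElem *e);
    static void resowner_hash_remove(ResourceOwnerData *o, ResourceKind k,
                                     Datum d);

    static void
    resowner_remember(ResourceOwnerData *owner, ResourceKind kind, Datum item)
    {
        if (owner->narr == RESOWNER_ARRAY_SIZE)
        {
            /* Array is full: spill *all* entries to the hash table. */
            for (uint32 i = 0; i < owner->narr; i++)
                resowner_hash_insert(owner, &owner->arr[i]);
            owner->narr = 0;
        }
        /* New entries always go to the array, so it holds the most
         * recently remembered entries, like an LRU cache. */
        owner->arr[owner->narr].kind = kind;
        owner->arr[owner->narr].item = item;
        owner->narr++;
    }

    static void
    resowner_forget(ResourceOwnerData *owner, ResourceKind kind, Datum item)
    {
        /* The array is always scanned, newest entries first. */
        for (int i = (int) owner->narr - 1; i >= 0; i--)
        {
            if (owner->arr[i].kind == kind && owner->arr[i].item == item)
            {
                /* Fill the hole with the last entry. */
                owner->arr[i] = owner->arr[owner->narr - 1];
                owner->narr--;
                return;
            }
        }
        /* Not in the array; it must have been spilled earlier. */
        resowner_hash_remove(owner, kind, item);
    }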
Why this change? I was afraid that now that all different resources
share the same data structure, remembering e.g. a lot of locks at the
beginning of a transaction would cause the switch to the hash table,
making all subsequent remember/forget operations, like for buffer pins,
slower. That kind of interference seems bad. By continuing to use the
array for the recently-remembered entries, we avoid that problem. The
[numkeep, numsnaps] = [65, 70] test is in that regime, and the patched
version was significantly faster.
Because the array is now always scanned, I felt that it needs to be
small, to avoid wasting much time scanning for entries that have already
been moved to the hash table. That's why I made it just 8 entries.
Perhaps 8 entries is too small, after all? Linearly scanning an array is
very fast. To test that, I bumped up RESOWNER_ARRAY_SIZE to 64, and ran
the test again:
numkeep | numsnaps | lifo_time_ns | fifo_time_ns
---------+----------+--------------+--------------
0 | 1 | 35.4 | 31.5
0 | 2 | 32.3 | 32.3
0 | 4 | 32.8 | 31.0
0 | 6 | 34.5 | 33.7
0 | 8 | 33.9 | 32.7
0 | 10 | 33.7 | 33.0
0 | 15 | 34.8 | 33.1
0 | 20 | 35.0 | 32.6
0 | 25 | 36.9 | 33.0
0 | 30 | 38.7 | 33.2
0 | 35 | 39.9 | 34.5
0 | 40 | 40.8 | 35.5
0 | 45 | 42.6 | 36.4
0 | 50 | 44.9 | 37.8
0 | 55 | 45.4 | 38.5
0 | 58 | 47.7 | 39.6
0 | 60 | 45.9 | 40.2
0 | 62 | 47.6 | 40.9
0 | 64 | 51.4 | 41.2
0 | 66 | 38.2 | 39.8
0 | 68 | 39.7 | 40.3
0 | 70 | 38.1 | 40.9
(22 rows)
Here you can see that as numsnaps increases, the test becomes slower,
but then it becomes faster again at 64-66, when it switches to the hash
table. So 64 seems too large. 32 seems to be the sweet spot here; that's
roughly where scanning the array and probing the hash table are about
the same speed.
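If we go with 32, the change against the current patch is just bumping
the constant, i.e. something like:

    -#define RESOWNER_ARRAY_SIZE 8
    +#define RESOWNER_ARRAY_SIZE 32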
- Heikki