From: Nazir Bilal Yavuz <byavuz81(at)gmail(dot)com>
To: Cédric Villemain <Cedric(dot)Villemain(at)abcsql(dot)com>
Cc: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Change prefetch and read strategies to use range in pg_prewarm ... and raise a question about posix_fadvise WILLNEED
Date: 2024-03-07 11:19:16
Message-ID: CAN55FZ1CsDDmhyJq_ZWM1h1eNpBqkxYbBaXVWfkS73MaS3-Rgw@mail.gmail.com
Lists: pgsql-hackers
Hi,
On Wed, 6 Mar 2024 at 18:23, Cédric Villemain
<Cedric(dot)Villemain(at)abcsql(dot)com> wrote:
>
> Hi Nazir,
>
>
> thank you for your review. I comment below.
>
>
> On 05/03/2024 12:07, Nazir Bilal Yavuz wrote:
> >> 2. The second one does implement smgrprefetch with range and loops by
> >> default per segment to still have a check for interrupts.
> > It looks good codewise but RELSEG_SIZE is too big to prefetch. Man
> > page of posix_fadvise [1] states that: "The amount of data read may be
> > decreased by the kernel depending on virtual memory load. (A few
> > megabytes will usually be fully satisfied, and more is rarely
> > useful.)". It is trying to prefetch 1GB data now. That could explain
> > your observation about differences between nr_cache numbers.
>
> From an "adminsys" point of view I would find it beneficial to get a single
> syscall per file, respecting the logic and behavior of the underlying
> system call.
I agree.
> The behavior is 100% OK, and in fact it might be a bad idea to prefetch
> block by block as the result is just to put more pressure on a system if
> it is already under pressure.
>
> Though there are use cases and it's nice to be able to do that too at
> this per page level.
Yes, I do not know which is more important: caching more blocks while
creating more pressure, or creating less pressure while caching fewer
blocks. Also, pg_prewarm is designed to be run at startup, so I guess
there will not be much pressure then.
> About [1], it's a very old statement about resources. And Linux manages
> part of the problem for us here, I think [2]:
>
> /*
> * Chunk the readahead into 2 megabyte units, so that we don't pin too much
> * memory at once.
> */
> void force_page_cache_ra(....)
Thanks for pointing out the actual code. Yes, it looks like the kernel is
already doing that. I would like to do more testing once you move the
vm_relation functions into pgfincore.
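For illustration, this is roughly the shape I have in mind (just a sketch
under my own assumptions, not the actual patch; prefetch_range is a made-up
name for this example): one posix_fadvise() call covering the whole range,
leaving the chunking to the kernel:

```c
/*
 * Rough sketch, not the actual patch: prefetch a block range with a single
 * posix_fadvise() call instead of one call per 8kB block.  The kernel
 * chunks the readahead internally (2MB units in force_page_cache_ra), so a
 * large len should not pin too much memory at once.
 */
#define _POSIX_C_SOURCE 200112L
#include <fcntl.h>

#define BLCKSZ 8192

static int
prefetch_range(int fd, long first_block, long nblocks)
{
	off_t		offset = (off_t) first_block * BLCKSZ;
	off_t		len = (off_t) nblocks * BLCKSZ;	/* 0 would mean "to end of file" */

	/* one syscall for the whole range; the kernel decides how much to load */
	return posix_fadvise(fd, offset, len, POSIX_FADV_WILLNEED);
}
```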
> >> Q: posix_fadvise may not work exactly the way you think it does, or does
> >> it ?
> >>
> >>
> >> In details, and for the question:
> >>
> >> However, if instead you provide a real range, or the magic len=0 to
> >> posix_fadvise, then blocks are "more" loaded according to effective vm
> >> pressure (which is not the case on the previous example).
> >> As a result only a small part of the relation might be loaded, and this
> >> is probably not what end-users expect despite being probably a good
> >> choice (you can still free cache beforehand to help the kernel).
>
> I think it's a matter of documenting the feature well and, if at all
> possible, as usual, not letting users be negatively impacted by default.
>
>
> >> An example, below I'm using vm_relation_cachestat() which provides linux
> >> cachestat output, and vm_relation_fadvise() to unload cache, and
> >> pg_prewarm for the demo:
> >>
> >> # clear cache: (nr_cache is the number of file system pages in cache,
> >> not postgres blocks)
> >>
> >> ```
> >> postgres=# select block_start, block_count, nr_pages, nr_cache from
> >> vm_relation_cachestat('foo',range:=1024*32);
> >> block_start | block_count | nr_pages | nr_cache
> >> -------------+-------------+----------+----------
> >> 0 | 32768 | 65536 | 0
> >> 32768 | 32768 | 65536 | 0
> >> 65536 | 32768 | 65536 | 0
> >> 98304 | 32768 | 65536 | 0
> >> 131072 | 1672 | 3344 | 0
> >> ```
> >>
> >> # load full relation with pg_prewarm (patched)
> >>
> >> ```
> >> postgres=# select pg_prewarm('foo','prefetch');
> >> pg_prewarm
> >> ------------
> >> 132744
> >> (1 row)
> >> ```
> >>
> >> # Checking results:
> >>
> >> ```
> >> postgres=# select block_start, block_count, nr_pages, nr_cache from
> >> vm_relation_cachestat('foo',range:=1024*32);
> >> block_start | block_count | nr_pages | nr_cache
> >> -------------+-------------+----------+----------
> >> 0 | 32768 | 65536 | 320
> >> 32768 | 32768 | 65536 | 0
> >> 65536 | 32768 | 65536 | 0
> >> 98304 | 32768 | 65536 | 0
> >> 131072 | 1672 | 3344 | 320 <-- segment 1
> >>
> >> ```
> >>
> >> # Load block by block and check:
> >>
> >> ```
> >> postgres=# select from generate_series(0, 132743) g(n), lateral
> >> pg_prewarm('foo','prefetch', 'main', n, n);
> >> postgres=# select block_start, block_count, nr_pages, nr_cache from
> >> vm_relation_cachestat('foo',range:=1024*32);
> >> block_start | block_count | nr_pages | nr_cache
> >> -------------+-------------+----------+----------
> >> 0 | 32768 | 65536 | 65536
> >> 32768 | 32768 | 65536 | 65536
> >> 65536 | 32768 | 65536 | 65536
> >> 98304 | 32768 | 65536 | 65536
> >> 131072 | 1672 | 3344 | 3344
> >>
> >> ```
> >>
> >> The duration of the last example is also really significant: full
> >> relation is 0.3ms and block by block is 1550ms!
> >> You might think it's because of generate_series or whatever, but I have
> >> the exact same behavior with pgfincore.
> >> I can compare loading and unloading duration for similar "async" work,
> >> here each call is from block 0 with len of 132744 and a range of 1 block
> >> (i.e. posix_fadvise on 8kB at a time).
> >> So they have exactly the same number of operations doing DONTNEED or
> >> WILLNEED, but distinct duration on the first "load":
> >>
> >> ```
> >>
> >> postgres=# select * from
> >> vm_relation_fadvise('foo','main',0,132744,1,'POSIX_FADV_DONTNEED');
> >> vm_relation_fadvise
> >> ---------------------
> >>
> >> (1 row)
> >>
> >> Time: 25.202 ms
> >> postgres=# select * from
> >> vm_relation_fadvise('foo','main',0,132744,1,'POSIX_FADV_WILLNEED');
> >> vm_relation_fadvise
> >> ---------------------
> >>
> >> (1 row)
> >>
> >> Time: 1523.636 ms (00:01.524) <----- not free !
> >> postgres=# select * from
> >> vm_relation_fadvise('foo','main',0,132744,1,'POSIX_FADV_WILLNEED');
> >> vm_relation_fadvise
> >> ---------------------
> >>
> >> (1 row)
> >>
> >> Time: 24.967 ms
> >> ```
> > I confirm that there is a time difference between calling pg_prewarm
> > by full relation and block by block, but IMO this is expected. When
> > pg_prewarm is called by full relation, it does the initialization part
> > just once but when it is called block by block, it does initialization
> > for each call, right?
>
>
> Not sure what initialization is here exactly, in my example with
> WILLNEED/DONTNEED there are exactly the same code pattern and syscall
> request(s), just the flag is distinct, so initialization costs are
> expected to be very similar.
Sorry, there was a miscommunication. I was talking about pg_prewarm's
initialization: if pg_prewarm is called block by block (using
generate_series), it does its initialization block_count times, whereas a
single full-relation call does it only once. But it seems that is not
where the difference comes from; see below.
> I'll try to move forward on those vm_relation functions into pgfincore
> so it'll be easier to run similar tests and compare.
Thanks, that will be helpful for the testing.
> >
> > I ran 'select pg_prewarm('foo','prefetch', 'main', n, n) FROM
> > generate_series(0, 132744)n;' a couple of times consecutively but I
> > could not see the time difference between the first run (first load) and
> > the consecutive runs. Am I doing something wrong?
>
>
> Maybe the system is overloaded and thus by the time you're done
> prefetching the tail blocks, the head ones have been dropped already. So
> looping on that leads to similar durations.
> If it's already in cache and not removed from it, execution time is
> stable. This point (in cache or not) is hard to guess right until you
> check the status, or you ensure to clean it first.
My bad, I was dropping buffers from the Postgres cache, not from the
kernel cache. Here are my results now:
patched | prefetch test
$ create_the_data [3]
$ drop_kernel_cache [4]
$ first_run_full_relation_prefetch [5] -> Time: 11.395 ms
$ second_run_full_relation_prefetch [5] -> Time: 0.887 ms
master | prefetch test
$ create_the_data [3]
$ drop_kernel_cache [4]
$ first_run_full_relation_prefetch [5] -> Time: 3208.944 ms
$ second_run_full_relation_prefetch [5] -> Time: 283.905 ms
I did more perf tests comparing the first and the second run of the
prefetch and found this on master:
first run:
- 86.40% generic_fadvise
- 86.24% force_page_cache_ra
- 85.99% page_cache_ra_unbounded
+ 37.36% filemap_add_folio
+ 34.14% read_pages
+ 8.31% folio_alloc
+ 4.55% up_read
0.77% xa_load
second run:
- 20.64% generic_fadvise
- 18.64% force_page_cache_ra
- 17.46% page_cache_ra_unbounded
+ 8.54% xa_load
2.82% down_read
2.29% read_pages
1.45% up_read
So, it looks like the difference between the first and the second run
comes from a kernel optimization that skips readahead when the page is
already in the cache [6]. That said, I do not know what causes the
difference between WILLNEED and DONTNEED, and I do not have enough
material to test it, but I guess it is something similar.
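To double-check that interpretation outside of Postgres, a hypothetical
micro-benchmark like the one below (my own sketch, not something from this
thread) should show the same cold/warm gap: the first WILLNEED pays for
the readahead, while the second returns almost immediately because the
pages are already resident:

```c
/*
 * Hypothetical micro-benchmark: time two consecutive POSIX_FADV_WILLNEED
 * calls on the same file.  On a cold cache the first call triggers the
 * actual readahead work; on the second call the pages are already in the
 * page cache, so the kernel mostly skips the work.
 */
#define _POSIX_C_SOURCE 200112L
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <time.h>
#include <unistd.h>

static double
timed_willneed(int fd, off_t len)
{
	struct timespec t0, t1;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	posix_fadvise(fd, 0, len, POSIX_FADV_WILLNEED);
	clock_gettime(CLOCK_MONOTONIC, &t1);
	return (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6;
}

int
main(int argc, char **argv)
{
	int			fd;
	struct stat st;

	if (argc < 2)
		return 1;
	fd = open(argv[1], O_RDONLY);
	fstat(fd, &st);
	printf("first  WILLNEED: %.3f ms\n", timed_willneed(fd, st.st_size));
	printf("second WILLNEED: %.3f ms\n", timed_willneed(fd, st.st_size));
	close(fd);
	return 0;
}
```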
I did not test read performance but I am planning to do that soon.
> > [1] https://man7.org/linux/man-pages/man2/posix_fadvise.2.html#DESCRIPTION
>
> [2] https://elixir.bootlin.com/linux/latest/source/mm/readahead.c#L303
[3]
CREATE EXTENSION pg_prewarm;
drop table if exists foo;
create table foo ( id int, c text) with (autovacuum_enabled=false);
insert into foo select i, repeat('a', 1000) from generate_series(1,10000000)i;
[4] echo 3 | sudo tee /proc/sys/vm/drop_caches
[5] select pg_prewarm('foo', 'prefetch', 'main');
[6] https://elixir.bootlin.com/linux/latest/source/mm/readahead.c#L232
--
Regards,
Nazir Bilal Yavuz
Microsoft