From: Ashutosh Bapat <ashutosh(dot)bapat(dot)oss(at)gmail(dot)com>
To: Andy Fan <zhihui(dot)fan1213(at)gmail(dot)com>
Cc: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Dynamic gathering the values for seq_page_cost/xxx_cost
Date: 2020-09-21 13:03:35
Message-ID: CAExHW5snRxq_rbtodUO8XJFvf3g5p48RT-q57oKP3t=ZbVw2CQ@mail.gmail.com
Lists: pgsql-hackers
On Mon, Sep 21, 2020 at 9:11 AM Andy Fan <zhihui(dot)fan1213(at)gmail(dot)com> wrote:
>
> Here are some changes for my detection program.
>
> | Setup                                            | seq_read_lat (us) | random_read_lat (us) |
> | FIO                                              | 12                | 148                  |
> | Previous main.c                                  | 8.5               | 74                   |
> | invalidate_device_cache before each testing      | 9                 | 150                  |
> | prepare the test data file with O_DIRECT option  | 15                | 150                  |
>
> In invalidate_device_cache, I just create another 1GB data file and read
> it (see the invalidate_device_cache function). This is similar to the previous fio setup.
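
For readers following along, here is a minimal sketch of what such an
invalidate_device_cache routine could look like, based only on the description
above. The file name, chunk size, and missing error handling are illustrative
assumptions, not the actual main.c code, and it assumes the OS page cache is
dropped separately, as also mentioned above:

#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define DUMMY_FILE  "cache_filler.dat"          /* assumed name */
#define DUMMY_SIZE  (1024L * 1024 * 1024)       /* 1GB, as described above */
#define CHUNK_SIZE  (1024 * 1024)               /* 1MB per write/read */

static void
invalidate_device_cache(void)
{
    char       *buf = malloc(CHUNK_SIZE);
    long        off;
    int         fd;

    memset(buf, 'x', CHUNK_SIZE);

    /* Write 1GB of throwaway data ... */
    fd = open(DUMMY_FILE, O_CREAT | O_WRONLY | O_TRUNC, 0644);
    for (off = 0; off < DUMMY_SIZE; off += CHUNK_SIZE)
        write(fd, buf, CHUNK_SIZE);
    fsync(fd);
    close(fd);

    /* ... then read it back, so the device's internal cache is filled with
     * blocks belonging to this filler file rather than the file under test.
     * (Assumes the OS page cache has been dropped separately.) */
    fd = open(DUMMY_FILE, O_RDONLY);
    while (read(fd, buf, CHUNK_SIZE) > 0)
        ;
    close(fd);

    unlink(DUMMY_FILE);
    free(buf);
}

int
main(void)
{
    invalidate_device_cache();
    return 0;
}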
>
> "Prepare the test data file with the O_DIRECT option" means: previously I
> prepared the test file with buffered IO, and before testing I invalidated
> the device cache and the file system cache, but the file prepared with
> buffered IO still got better performance, which I cannot explain. Since I
> don't want any caching, I switched to the O_DIRECT option in the end. The
> seq_read_lat changed from 9us to 15us.
> I still can't explain the 25% difference from the FIO result (12 us vs 9 us).
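
Again purely for illustration, a minimal sketch of preparing the test file
with O_DIRECT so the data bypasses the OS page cache while it is written. The
file name, size, and alignment are assumptions; on Linux, O_DIRECT needs
_GNU_SOURCE and an aligned buffer:

#define _GNU_SOURCE                             /* for O_DIRECT on Linux */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define TEST_FILE   "test_data.dat"             /* assumed name */
#define FILE_SIZE   (1024L * 1024 * 1024)       /* 1GB test file */
#define BLOCK_SIZE  8192

int
main(void)
{
    void       *buf;
    long        off;
    int         fd;

    /* O_DIRECT requires an aligned buffer; posix_memalign provides one. */
    if (posix_memalign(&buf, 4096, BLOCK_SIZE) != 0)
        return 1;
    memset(buf, 'x', BLOCK_SIZE);

    /* Write the file with O_DIRECT so it never enters the OS page cache. */
    fd = open(TEST_FILE, O_CREAT | O_WRONLY | O_TRUNC | O_DIRECT, 0644);
    if (fd < 0)
        return 1;
    for (off = 0; off < FILE_SIZE; off += BLOCK_SIZE)
        write(fd, buf, BLOCK_SIZE);
    fsync(fd);
    close(fd);
    free(buf);
    return 0;
}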
>
> In the end, the random_page_cost turns out not to change very much.
>
> /u/y/g/fdirect> sudo ./main
> fs_cache_lat = 0.569031us, seq_read_lat = 18.901749us, random_page_lat = 148.650589us
>
> cache hit ratio: 1.000000 random_page_cost 1.000000
> cache hit ratio: 0.900000 random_page_cost 6.401019
> cache hit ratio: 0.500000 random_page_cost 7.663772
> cache hit ratio: 0.100000 random_page_cost 7.841498
> cache hit ratio: 0.000000 random_page_cost 7.864383
>
> This result looks very different from the common advice of "we should use 1.1 ~ 1.5 for SSD".
>
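For what it's worth, the printed random_page_cost values above are exactly
reproduced by blending the fs-cache latency and the device latency at each hit
ratio, then dividing the blended random latency by the blended sequential
latency. The small standalone program below is a reconstruction from the
output, not the actual main.c:

#include <stdio.h>

int
main(void)
{
    /* Latencies (in us) taken from the quoted output. */
    const double fs_cache_lat = 0.569031;
    const double seq_read_lat = 18.901749;
    const double random_page_lat = 148.650589;
    const double hit_ratios[] = {1.0, 0.9, 0.5, 0.1, 0.0};

    for (int i = 0; i < 5; i++)
    {
        double      h = hit_ratios[i];
        double      cost = (h * fs_cache_lat + (1.0 - h) * random_page_lat)
                         / (h * fs_cache_lat + (1.0 - h) * seq_read_lat);

        /* Prints 1.000000, 6.401019, 7.663772, 7.841498, 7.864383 */
        printf("cache hit ratio: %f random_page_cost %f\n", h, cost);
    }
    return 0;
}
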
Very interesting. Thanks for working on this. In an earlier email you
mentioned that TPCH plans changed to efficient ones when you changed
random_page_cost from 4 to 8.6 while seq_page_cost was set to 1. IIUC,
setting the ratio of random_page_cost to seq_page_cost to the ratio between
the corresponding latencies improved the plans. How about trying this with
that ratio set to the one obtained from the latencies reported by FIO? Do we
see any better plans?
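
For concreteness, a small sketch of that arithmetic; the latencies are simply
copied from the figures quoted above, and the ratio itself is plain arithmetic
rather than something either tool reports directly:

#include <stdio.h>

int
main(void)
{
    /* Figures copied from the quoted email. */
    const double fio_seq_us = 12.0, fio_rand_us = 148.0;
    const double prog_seq_us = 18.901749, prog_rand_us = 148.650589;

    /* With seq_page_cost = 1.0, the latency ratio is the candidate
     * random_page_cost to try. */
    printf("FIO-derived ratio:    %.2f\n", fio_rand_us / fio_seq_us);   /* ~12.33 */
    printf("main.c-derived ratio: %.2f\n", prog_rand_us / prog_seq_us); /* ~7.86 */
    return 0;
}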
Page cost is one thing, but there are also CPU costs involved, e.g. in the
cost of expression evaluation. Should those be changed according to the CPU
latency as well?
--
Best Wishes,
Ashutosh Bapat