From: Ashutosh Bapat <ashutosh(dot)bapat(dot)oss(at)gmail(dot)com>
To: Andy Fan <zhihui(dot)fan1213(at)gmail(dot)com>
Cc: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Dynamic gathering the values for seq_page_cost/xxx_cost
Date: 2020-09-25 09:15:05
Message-ID: CAExHW5uYER4TTric6VfWLb7kxQesLExXdjrYWyUs7Ukdaa=yHg@mail.gmail.com
Lists: pgsql-hackers
On Tue, Sep 22, 2020 at 10:57 AM Andy Fan <zhihui(dot)fan1213(at)gmail(dot)com> wrote:
>
>
> My tool set random_page_cost to 8.6, but based on the fio data, it should be
> set to 12.3 on the same hardware. And I do see better plans with 12.3.
> Looks too smooth to believe it is true..
>
> The attached result_fio_mytool.tar.gz is my test result. git show HEAD^^
> shows the original plans with 8.6, git show HEAD^ shows the plan changes after we
> changed random_page_cost, and git show HEAD shows the run-time statistics changes
> for these queries. I also uploaded the test tool [1] so you can double-check.
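[The fio-derived value quoted above presumably comes from the ratio of random to sequential page-read latency, with seq_page_cost pinned at 1.0. A minimal sketch of that arithmetic; the latency numbers are hypothetical, chosen only to reproduce the 12.3 figure from the thread:]

```python
# Hypothetical 8 kB page-read latencies as fio might report them
# (microseconds); these numbers are illustrative, not from the thread.
seq_lat_us = 40.0
rand_lat_us = 492.0

# seq_page_cost is conventionally fixed at 1.0, so random_page_cost
# becomes the random/sequential latency ratio.
seq_page_cost = 1.0
random_page_cost = seq_page_cost * rand_lat_us / seq_lat_us
print(round(random_page_cost, 1))  # -> 12.3 with these illustrative inputs
```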
The scripts seem to start and stop the server and drop caches before every
query. That is the scenario in which setting random_page_cost to the
fio-based ratio yields better plans. In practice, though, these costs
need to be set on a server where queries run concurrently and
repeatedly, and that is where caching behaviour plays an important
role. Can we write a tool that recommends costs for that scenario?
How do the fio-based costs perform when the queries are run repeatedly?
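[To make the caching point concrete: on a warm server, only the fraction of page reads that miss the buffer/OS cache pay the cold-disk price, so the cost that actually matches observed behaviour is a blend. This is an illustrative model only, not PostgreSQL's actual cost machinery; the hit ratio and cached-read cost below are assumptions:]

```python
def effective_random_page_cost(disk_cost, cached_cost=1.0, cache_hit_ratio=0.9):
    """Blend the cold-cache (fio-derived) cost with the cheap cached-read
    cost, weighted by how often repeated queries hit cache.
    All parameter values here are hypothetical."""
    return cache_hit_ratio * cached_cost + (1 - cache_hit_ratio) * disk_cost

# With a 90% hit ratio, the fio-derived 12.3 collapses to about 2.13,
# illustrating why cold-cache measurements overstate the cost for
# repeatedly-run queries.
print(round(effective_random_page_cost(12.3), 2))  # -> 2.13
```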
--
Best Wishes,
Ashutosh Bapat