Re: Proposal: Improve bitmap costing for lossy pages

From: Dilip Kumar <dilipbalaut(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Alexander Kuzmenkov <a(dot)kuzmenkov(at)postgrespro(dot)ru>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Proposal: Improve bitmap costing for lossy pages
Date: 2017-09-04 05:48:17
Message-ID: CAFiTN-sNOay1LDwq5w9=m5_exA49bwnO9HZY4OehZbZVb0nCuQ@mail.gmail.com
Lists: pgsql-hackers

On Thu, Aug 31, 2017 at 11:27 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:

I have repeated one of the tests after fixing the problems you pointed
out, but this time the results are not that impressive. It seems the
check below was the problem in the previous patch:

if (tbm->nentries > tbm->maxentries / 2)
    tbm->maxentries = Min(tbm->nentries, (INT_MAX - 1) / 2) * 2;

With the patch we were lossifying only until tbm->nentries dropped to
90% of tbm->maxentries, after which this check was always true, so
tbm->maxentries got doubled on every lossification. That was the main
reason for the huge reduction in lossy pages: basically, we started
using more work_mem in all the cases.
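
To make that interaction concrete, here is a minimal, self-contained
sketch of the effect (the FakeTBM struct, the refill step, and
TBM_FILLFACTOR = 0.9 are stand-ins for illustration, not the actual
tidbitmap.c code):

#include <limits.h>
#include <stdio.h>

#define TBM_FILLFACTOR 0.9
#define Min(a, b) ((a) < (b) ? (a) : (b))   /* as in PostgreSQL's c.h */

typedef struct
{
    int nentries;   /* current number of hashtable entries */
    int maxentries; /* limit derived from work_mem */
} FakeTBM;

static void
lossify_sketch(FakeTBM *tbm)
{
    /* Stand-in for the lossify loop: stop at ~90% of maxentries. */
    while (tbm->nentries > tbm->maxentries * TBM_FILLFACTOR)
        tbm->nentries--;

    /*
     * The leftover check: nentries is now ~90% of maxentries, which is
     * always above maxentries / 2, so maxentries grows on every call.
     */
    if (tbm->nentries > tbm->maxentries / 2)
        tbm->maxentries = Min(tbm->nentries, (INT_MAX - 1) / 2) * 2;
}

int
main(void)
{
    FakeTBM tbm = {1000, 1000};

    for (int i = 1; i <= 3; i++)
    {
        tbm.nentries = tbm.maxentries;  /* inserts refill to the limit */
        lossify_sketch(&tbm);
        printf("after lossify %d: nentries=%d maxentries=%d\n",
               i, tbm.nentries, tbm.maxentries);
    }
    return 0;
}

Each call ends with maxentries set to 2 * 0.9 * maxentries, i.e. the
limit grows by about 1.8x every time, which is why the earlier runs
effectively used much more work_mem than the configured limit.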

I have taken one reading just to see the impact after fixing this
problem in the patch.

Work_mem: 40 MB
(Lossy pages count)

Query    head        patch
6        995223      733087
14       337894      206824
15       995417      798817
20       1654016     1588498

Still, we see a good reduction in the lossy page count: roughly 26%,
39%, 20%, and 4% for queries 6, 14, 15, and 20 respectively. I will
repeat the test at different work_mem settings and for different
values of TBM_FILLFACTOR and share the numbers soon.

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
