From: | "Tsunakawa, Takayuki" <tsunakawa(dot)takay(at)jp(dot)fujitsu(dot)com> |
---|---|
To: | 'Thomas Munro' <thomas(dot)munro(at)enterprisedb(dot)com> |
Cc: | Robert Haas <robertmhaas(at)gmail(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Supporting huge pages on Windows |
Date: | 2016-09-28 06:32:15 |
Message-ID: | 0A3221C70F24FB45833433255569204D1F5F2D89@G01JPEXMBYT05 |
Lists: | pgsql-hackers |
From: Thomas Munro [mailto:thomas(dot)munro(at)enterprisedb(dot)com]
> > huge_pages=off: 70412 tps
> > huge_pages=on : 72100 tps
>
> Hmm. I guess it could be noise or random code rearrangement effects.
I'm not sure the difference was random noise, because running multiple sets of three pgbench runs (huge_pages = on, off, on, off, on...) produced similar results. But I expected a somewhat greater improvement, say +10%. There may be a better benchmark model in which large pages stand out, but I think pgbench is not so bad, because its random data access should cause TLB cache misses.
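For what it's worth, the alternating-run procedure described above could be scripted roughly as follows. This is only a sketch: the data directory path, client count, and run duration are placeholders, and it assumes a local cluster with a pgbench-initialized database. Only the tps-parsing helper is exercised without a server.

```python
# Hypothetical driver for alternating huge_pages on/off pgbench runs.
# Paths and pgbench options below are illustrative, not from the thread.
import re
import subprocess


def parse_tps(pgbench_output: str) -> float:
    """Extract the tps figure from pgbench's summary output."""
    m = re.search(r"tps = ([0-9.]+)", pgbench_output)
    if m is None:
        raise ValueError("no tps line found in pgbench output")
    return float(m.group(1))


def run_once(huge_pages: str) -> float:
    """Restart the server with huge_pages set, then run one pgbench pass."""
    # huge_pages can only change at server start, so restart with a
    # one-off GUC override via pg_ctl's -o option.
    subprocess.run(["pg_ctl", "restart", "-D", "/path/to/data",
                    "-o", f"-c huge_pages={huge_pages}"], check=True)
    out = subprocess.run(["pgbench", "-c", "8", "-T", "60", "postgres"],
                         check=True, capture_output=True, text=True)
    return parse_tps(out.stdout)


if __name__ == "__main__":
    # Alternate the setting (on, off, on, ...) so that slow drift in
    # machine state affects both configurations roughly equally.
    for setting in ["on", "off"] * 3:
        print(setting, run_once(setting))
```

Averaging the tps of each group and comparing them then gives a per-setting figure that is less sensitive to run-to-run noise than a single pair of runs.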
> I saw your recent post[2] proposing to remove the sentence about the 512MB
> effective limit and I wondered why you didn't go to larger sizes with a
> larger database and more run time. But I will let others with more
> benchmarking experience comment on the best approach to investigate Windows
> shared_buffers performance.
Yes, I could have gone to 8GB of shared_buffers because my PC has 16GB of RAM, but I felt the number of variations was sufficient. Anyway, any comments on the benchmarking approach would be appreciated.
Regards
Takayuki Tsunakawa