From: Saurabh Nanda <saurabhnanda(at)gmail(dot)com>
To: pgsql-performance(at)lists(dot)postgresql(dot)org
Subject: Will higher shared_buffers improve tpcb-like benchmarks?
Date: 2019-01-29 11:39:14
Message-ID: CAPz=2oEnzLofpgZnAwmwaZtW-OLn5+MFRux9uFnnWs2+YecZAA@mail.gmail.com
Lists: pgsql-performance
Hi,
I'm going crazy trying to optimise my Postgres config for a production
setting [1]. Once I realised random changes weren't getting me anywhere, I
finally purchased PostgreSQL 10 High Performance [2] and came to understand
the impact of shared_buffers.
IIUC, shared_buffers won't have any significant impact in the following
scenario, right?
-- DB size = 30GB
-- shared_buffers = 2GB
-- workload = tpcb-like
This is because the tpcb-like workload selects & updates random rows from
the DB [3]. Therefore, with 2GB of shared_buffers against a 30GB dataset,
there is only about a 2/30 ≈ 6.7% chance (did I get my probability
correct?) that the required data will already be in shared_buffers. Did I
understand this correctly?
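The arithmetic behind that estimate can be sketched as follows. This is a
back-of-the-envelope model, not anything from the thread: it assumes the
tpcb-like workload picks rows uniformly at random and ignores the OS page
cache, so the chance a requested page is cached is simply cache size over
DB size.

```python
def hit_probability(cache_gb: float, db_gb: float) -> float:
    """Expected fraction of uniformly random page requests served from a
    cache of the given size (capped at 100% when the cache fits the DB)."""
    return min(cache_gb / db_gb, 1.0)

# 2GB shared_buffers over a 30GB database, as in the question:
print(f"{hit_probability(2, 30):.1%}")   # ~6.7%
# 15GB shared_buffers, the proposed setting:
print(f"{hit_probability(15, 30):.1%}")  # 50.0%
```

In practice the miss penalty is softer than this model suggests, because a
shared_buffers miss can still be served from the kernel's page cache rather
than disk.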
If nothing else becomes the bottleneck (e.g. periodically writing dirty
pages to disk), increasing shared_buffers to 15GB+ should have a
significant impact for this DB size and workload, right? (The system has
64 GB RAM.)
[1] Related thread at
https://www.postgresql.org/message-id/flat/CAPz%3D2oGdmvirLNX5kys%2BuiY7LKzCP4sTiXXob39qq6eDkEuk2Q%40mail.gmail.com
[2] https://www.packtpub.com/big-data-and-business-intelligence/postgresql-10-high-performance
[3] https://www.postgresql.org/docs/11/pgbench.html#id-1.9.4.10.7.2
-- Saurabh.