From: Daulat Ram <Daulat(dot)Ram(at)exponential(dot)com>
To: "hans(dot)schou(at)gmail(dot)com" <hans(dot)schou(at)gmail(dot)com>
Cc: "pgsql-general(at)lists(dot)postgresql(dot)org" <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: RE: Memory settings
Date: 2019-07-01 09:42:28
Message-ID: BL0PR01MB51871823B453CA2CADF4633C9BF90@BL0PR01MB5187.prod.exchangelabs.com
Lists: pgsql-general
Hello Hans,
Thanks for your reply. Yes, we are facing a performance issue.
The current output of the query is:
postgres=# SELECT pg_stat_database.datname,
postgres-# pg_stat_database.blks_read,
postgres-# pg_stat_database.blks_hit,
postgres-# round((pg_stat_database.blks_hit::double precision
postgres(# / (pg_stat_database.blks_read
postgres(# + pg_stat_database.blks_hit
postgres(# +1)::double precision * 100::double precision)::numeric, 2) AS cachehitratio
postgres-# FROM pg_stat_database
postgres-# WHERE pg_stat_database.datname !~ '^(template(0|1)|postgres)$'::text
postgres-# ORDER BY round((pg_stat_database.blks_hit::double precision
postgres(# / (pg_stat_database.blks_read
postgres(# + pg_stat_database.blks_hit
postgres(# + 1)::double precision * 100::double precision)::numeric, 2) DESC;
   datname    | blks_read | blks_hit  | cachehitratio
--------------+-----------+-----------+---------------
 kbcc_eng_ret |      1192 |    269999 |         99.56
 nagios       |       178 |     37185 |         99.52
 kccm         |      1431 |    214501 |         99.34
 kbbm         |   1944006 | 157383222 |         98.78
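(As a sanity check of the formula, the kbbm row works out to round(157383222 / (1944006 + 157383222 + 1) * 100, 2) = 98.78, which matches the cachehitratio column.)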
Thanks,
Daulat
From: Hans Schou <hans(dot)schou(at)gmail(dot)com>
Sent: Sunday, June 30, 2019 11:35 AM
To: Daulat Ram <Daulat(dot)Ram(at)exponential(dot)com>
Cc: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: Memory settings
Try running postgresqltuner.pl as suggested on https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server, and also look at the other info there.
After running for a few days with live data, run cache_hit_ratio.sql by Melvin Davidson:
SELECT pg_stat_database.datname,
pg_stat_database.blks_read,
pg_stat_database.blks_hit,
round((pg_stat_database.blks_hit::double precision
/ (pg_stat_database.blks_read
+ pg_stat_database.blks_hit
+1)::double precision * 100::double precision)::numeric, 2) AS cachehitratio
FROM pg_stat_database
WHERE pg_stat_database.datname !~ '^(template(0|1)|postgres)$'::text
ORDER BY round((pg_stat_database.blks_hit::double precision
/ (pg_stat_database.blks_read
+ pg_stat_database.blks_hit
+ 1)::double precision * 100::double precision)::numeric, 2) DESC;
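Note that the counters in pg_stat_database accumulate since the last statistics reset, so if you change settings it can help to reset them before measuring again. A minimal sketch, run while connected to each database you want to measure:

-- reset the statistics counters for the current database so the
-- cache hit ratio reflects only activity from this point on
SELECT pg_stat_reset();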
The real question is: Is your system slow?
On Sun, Jun 30, 2019 at 5:14 AM Daulat Ram <Daulat(dot)Ram(at)exponential(dot)com> wrote:
Hi team,
Can you please suggest suitable memory settings for PostgreSQL 11 if we have 80 GB RAM, 16 CPUs, and Linux as the OS?
If we set 25% of total RAM, the shared_buffers value will be 20 GB. Will that be useful, or can we set it to some other value like 8 GB or 12 GB?
According to https://pgtune.leopard.in.ua/#/, below are the suggested memory values for 80 GB RAM and 16 CPUs. I assume the suggested values for effective_cache_size = 60GB and shared_buffers = 20GB are too large.
max_connections = 500
shared_buffers = 20GB
effective_cache_size = 60GB
maintenance_work_mem = 2GB
checkpoint_completion_target = 0.7
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 1.1
effective_io_concurrency = 300
work_mem = 6553kB
min_wal_size = 1GB
max_wal_size = 2GB
max_worker_processes = 16
max_parallel_workers_per_gather = 8
max_parallel_workers = 16
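If values along these lines turn out to be reasonable, one way to apply them is ALTER SYSTEM from a superuser session. This is only a sketch using the pgtune numbers above, not a confirmed recommendation; shared_buffers (and the max_* settings) still require a server restart to take effect:

-- written to postgresql.auto.conf; shared_buffers needs a restart
ALTER SYSTEM SET shared_buffers = '20GB';
ALTER SYSTEM SET effective_cache_size = '60GB';
ALTER SYSTEM SET maintenance_work_mem = '2GB';
ALTER SYSTEM SET work_mem = '6553kB';
-- reload to pick up the settings that do not need a restart
SELECT pg_reload_conf();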
Please give your suggestions.
Regards,
Daulat