Re: effective_cache_size

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Joel Benelli <joel(dot)benelli(at)gmail(dot)com>
Cc: pgsql-admin(at)lists(dot)postgresql(dot)org
Subject: Re: effective_cache_size
Date: 2018-10-08 14:49:49
Message-ID: 27080.1539010189@sss.pgh.pa.us
Lists: pgsql-admin

Joel Benelli <joel(dot)benelli(at)gmail(dot)com> writes:
> We have a Linux server with 16GB of RAM and an SSD, running Postgres 9.5
> with this configuration:
> ...
> effective_cache_size = 12GB

> With these settings, memory usage sits between 10 and 13GB, and "out of
> memory" errors occur while running a dump, preventing new requests. After
> changing effective_cache_size to 6GB, memory usage stays between 8 and
> 12GB and the dump completes without errors.

That seems *very* odd. The only thing effective_cache_size actually
affects is the planner's opinion of the cost of indexscans [1],
so that it might choose different query plans after you change the setting.
But there's no obvious reason why that would lead to memory problems
in one case and not the other.
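
To illustrate what that planner effect looks like, here is a minimal
sketch -- the table and query are hypothetical, not from the original
report. effective_cache_size can be changed per session, so both plans
are easy to compare side by side:

    -- hypothetical table "orders" with an index on customer_id
    SET effective_cache_size = '12GB';
    EXPLAIN SELECT * FROM orders WHERE customer_id BETWEEN 1 AND 50000;
    -- might prefer: Index Scan using orders_customer_id_idx on orders

    SET effective_cache_size = '6GB';
    EXPLAIN SELECT * FROM orders WHERE customer_id BETWEEN 1 AND 50000;
    -- might now prefer: Seq Scan on orders (index scans look costlier
    -- when the planner assumes less of the data is cached)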

I'd suggest looking in the postmaster log to discover exactly which query
is failing, and then manually EXPLAIN'ing that query with both settings
to see what plans we are talking about. That might yield some more
insight.
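
If the log doesn't already show which statement hit the error, the stock
logging settings are enough to capture it -- a minimal postgresql.conf
sketch, nothing specific to this report:

    # log the statement text alongside error messages ('error' is the default)
    log_min_error_statement = error
    # optionally, log every statement while reproducing the problem
    #log_statement = 'all'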

regards, tom lane
