From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: Pradeep <pgundala(at)avineonindia(dot)com>
Cc: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: PostgreSQL Database performance
Date: 2016-09-06 19:08:37
Message-ID: CAOR=d=1s_tkE3nDjqAmODKWZLB=YtXQTxZgfVsRpUBPMFd-iLA@mail.gmail.com
Lists: pgsql-general
On Fri, Sep 2, 2016 at 9:38 PM, Pradeep <pgundala(at)avineonindia(dot)com> wrote:
> Dear Team,
>
>
>
> Could you please help me, after changing the below parameters in PostgreSQL
> configuration file it was not reflecting in OS level and also Database
> performance is degrading.
>
>
>
> Example: I am using Windows 2008 R2 server. For PostgreSQL I have allocated
> 24GB RAM out of 32GB.
Actually, effective_cache_size allocates nothing. It just tells the
PostgreSQL planner roughly how much memory the OS on that machine is
using for filesystem caching. On a 32GB machine with 1GB or so of
shared_buffers, 24GB is about right.
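For reference, a postgresql.conf memory layout along the lines described above might look like this (illustrative values for a 32GB box, not a tuned recommendation):

```ini
# postgresql.conf sketch for a 32GB machine (illustrative, not prescriptive)
shared_buffers = 1GB            # memory PostgreSQL actually allocates itself
effective_cache_size = 24GB     # planner hint only; allocates nothing,
                                # describes expected OS filesystem cache
```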
> However after changing the below parameters, In task bar it is showing 2.7GB
> Utilization even though my utilization is more.
2.7GB is OK. PostgreSQL expects the OS to help out with caching, so it
doesn't need to grab all the memory in the machine. In fact, that
would be counterproductive in most situations.
> So kindly suggest us, whether it will impact or not in Open source
> PostgreSQL database
>
> max_connections = 100
> shared_buffers = 512MB
> effective_cache_size = 24GB
> work_mem = 110100kB
This is WAY too high for work_mem. work_mem is how much memory a
single sort can grab at once. Each query may run more than one sort,
and you could have 100 queries running at once.
At 110100kB this setting is roughly 107MB per sort, so 100 connections
each running a few sorts can eat most of the machine's 32GB. When
things go wrong with this set too big, they go very wrong, sending the
machine into a swap storm from which it may not return.
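The worst-case arithmetic behind that warning can be sketched like this (the three-sorts-per-query figure is an assumption for illustration, not a measurement; complex queries can use more):

```python
# Worst-case memory math for work_mem. The sorts_per_query value is a
# hypothetical assumption; it is not taken from the original post.
work_mem_kb = 110100      # the setting under discussion
max_connections = 100     # from the posted config
sorts_per_query = 3       # assumed for illustration

per_sort_mib = work_mem_kb / 1024
worst_case_gib = per_sort_mib * max_connections * sorts_per_query / 1024

print(f"per sort:   {per_sort_mib:.1f} MiB")   # ~107.5 MiB
print(f"worst case: {worst_case_gib:.1f} GiB") # ~31.5 GiB on a 32 GiB box
```

Even under these modest assumptions the worst case approaches the machine's entire RAM, which is why an oversized work_mem can push the box into swap.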
> maintenance_work_mem = 2GB
>
> checkpoint_segments = 64
>
> checkpoint_completion_target = 0.9
Too high a checkpoint_completion_target may cause buffers to get
written out more often than needed, but it varies based on load etc.
> wal_buffers = 16MB
>
> default_statistics_target = 100
It's far more likely that you've just got poorly written queries. I'd
make a post with EXPLAIN ANALYZE output etc. Here's a good resource
for reporting slow queries:
https://wiki.postgresql.org/wiki/Slow_Query_Questions
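As that wiki page explains, the most useful thing to include is the plan output for the slow query. A sketch of the shape (the query itself is a placeholder; substitute your own):

```sql
-- Run the slow query under EXPLAIN (ANALYZE, BUFFERS) and post the output.
-- "your_table" is a placeholder, not a table from the original post.
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM your_table
WHERE some_column = 'some_value';
```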
--
To understand recursion, one must first understand recursion.