From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Bob Jolliffe <bobjolliffe(at)gmail(dot)com>
Cc: pgsql-performance(at)lists(dot)postgresql(dot)org
Subject: Re: does max_connections affect the query planner
Date: 2019-09-17 14:25:19
Message-ID: CAMkU=1zewgepVrH6YcSzWKBo220oRJX4iV6U8_7r+RRSG9Q9Ew@mail.gmail.com
Lists: pgsql-performance
On Tue, Sep 17, 2019 at 4:41 AM Bob Jolliffe <bobjolliffe(at)gmail(dot)com> wrote:
> Hi
>
> We are trying to diagnose why postgres might be making poor decisions
> regarding query plans. One theory is that it does not assume it has
> the memory suggested in effective_cache_size.
> We do know that max_connections is set quite high (600) when we don't
> really expect more than 100. I wonder does the planner take
> max_connections x work_mem into account when considering the memory it
> has potentially available?
>
No, it doesn't try to guess how many connections might be sharing
effective_cache_size. It assumes the entire amount is available to any query
at any given time.
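One way to see that effective_cache_size is purely a planner input, never an allocation, is that it can be changed per session and compared with EXPLAIN. A minimal sketch (the table and column names here are hypothetical):

```sql
-- effective_cache_size allocates no memory; it only feeds cost estimates,
-- so changing it per session is safe and takes effect immediately.
SHOW effective_cache_size;

SET effective_cache_size = '256MB';
EXPLAIN SELECT * FROM some_table WHERE indexed_col = 42;  -- hypothetical table

SET effective_cache_size = '16GB';
EXPLAIN SELECT * FROM some_table WHERE indexed_col = 42;
-- A larger value lowers the estimated cost of revisiting index pages
-- within one query, which can tip the planner toward an index scan.
```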
But effective_cache_size is only used for cases where a single query is going
to access the same blocks over and over again: the planner estimates that a
block will still be in cache on subsequent visits within that query. It does
not model blocks visited repeatedly across different queries, whether on the
same connection or different ones, and it has no notion that some objects
might be hotter than others, other than within one query.
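So the planner never does the max_connections x work_mem arithmetic from the original question. That calculation only matters to an operator sizing the server. A back-of-envelope sketch, with assumed values (the 600 comes from the question; the 4MB work_mem and 16GB of RAM are assumptions for illustration):

```python
# Hypothetical sizing arithmetic, not anything the planner computes:
# the planner ignores max_connections entirely, but an operator choosing
# effective_cache_size may want to budget for worst-case work_mem usage.
WORK_MEM_MB = 4           # assumed work_mem (the PostgreSQL default is 4MB)
MAX_CONNECTIONS = 600     # value from the original question
RAM_MB = 16 * 1024        # assumed 16GB server

# Each backend can use work_mem per sort/hash node, so the true worst
# case is higher; one allocation per connection is a conservative floor.
worst_case_work_mem_mb = WORK_MEM_MB * MAX_CONNECTIONS
remaining_for_cache_mb = RAM_MB - worst_case_work_mem_mb

print(worst_case_work_mem_mb)   # 2400
print(remaining_for_cache_mb)   # 13984
```

Lowering max_connections toward the ~100 actually expected shrinks that reserved floor considerably, but it does so by protecting the OS cache, not by changing any plan directly.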
Cheers,
Jeff