| From: | Justin Pryzby <pryzby(at)telsasoft(dot)com> |
|---|---|
| To: | Habib Nahas <habibnahas(at)gmail(dot)com> |
| Cc: | pgsql-performance(at)lists(dot)postgresql(dot)org |
| Subject: | Re: Autoanalyze CPU usage |
| Date: | 2017-12-19 22:55:57 |
| Message-ID: | 20171219225557.GG18184@telsasoft.com |
| Lists: | pgsql-performance |
On Tue, Dec 19, 2017 at 02:37:18PM -0800, Habib Nahas wrote:
> As it happens our larger tables operate as a business log and are also
> insert only.
>
> - There is no partitioning at this time since we expect to have an
> automated process to delete rows older than a certain date.
This is a primary use case for partitioning: bulk DROP rather than DELETE.
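A minimal sketch of what that looks like, assuming PostgreSQL 10+ declarative range partitioning; the `business_log` table and `logged_at` column names are illustrative, not from the thread:

```sql
-- Hypothetical monthly range-partitioned insert-only log table.
CREATE TABLE business_log (
    logged_at  timestamptz NOT NULL,
    payload    jsonb
) PARTITION BY RANGE (logged_at);

CREATE TABLE business_log_2017_12 PARTITION OF business_log
    FOR VALUES FROM ('2017-12-01') TO ('2018-01-01');

-- Retention becomes a near-instant catalog operation instead of a
-- row-by-row DELETE that creates dead tuples and vacuum/analyze work:
DROP TABLE business_log_2017_11;
```

The point is that DROP TABLE reclaims the space immediately and generates no dead rows, so autovacuum/autoanalyze never has to chew through the deleted range.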
> - Analyzing during off-hours sounds like a good idea; if there is no other
> way to determine the effect on the db we may end up doing that.
You can also implement a manual analyze job and hope to avoid autoanalyze.
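As a sketch of such a job (schedule and table name are assumptions, not from the thread): a manual ANALYZE updates the same statistics autoanalyze would, so after it runs, autoanalyze is less likely to fire during business hours.

```sql
-- Run off-hours, e.g. from cron:
--   0 3 * * * psql -d mydb -c 'ANALYZE business_log'
-- A manual ANALYZE refreshes pg_statistic and the analyze counters,
-- so the autoanalyze threshold is less likely to be hit mid-day.
ANALYZE business_log;
```
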
> - We have an open schema and heavily depend on jsonb, so I'm not sure if
> increasing the statistics target will be helpful.
If the increased stats target isn't useful for that, I would recommend
decreasing it.
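The statistics target can be lowered per column rather than globally, which suits this case: a jsonb column that doesn't benefit from a large sample can be cheapened without touching `default_statistics_target`. A minimal sketch (table and column names are hypothetical):

```sql
-- Lower the sample size for the jsonb column only; ANALYZE then has
-- far fewer values to sort and store for that column.
ALTER TABLE business_log ALTER COLUMN payload SET STATISTICS 10;

-- Re-analyze so the stats are rebuilt at the new, smaller target:
ANALYZE business_log;
```
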
--
Justin Pryzby
System Administrator
Telsasoft
+1-952-707-8581