RE: Postgres server 12.2 crash with process exited abnormally and possibly corrupted shared memory

From: Ishan Joshi <Ishan(dot)Joshi(at)amdocs(dot)com>
To: Michael Lewis <mlewis(at)entrata(dot)com>
Cc: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: RE: Postgres server 12.2 crash with process exited abnormally and possibly corrupted shared memory
Date: 2020-06-10 06:05:52
Message-ID: AM6PR0602MB339884872FE7EC2ABB1C9D3286830@AM6PR0602MB3398.eurprd06.prod.outlook.com
Lists: pgsql-general

Hi Michael,

Thanks for your response.

Please find answers for your questions
How many rows did these tables have before partitioning? --> We start the test with 0 rows in the partitioned tables.
Why did you decide to partition? --> These are heavy tables, with a high volume of DML operations performed on them and a high number of rows generated every hour.
Do these list partitions allow for plan-time pruning? --> We have tuned the application queries to utilize partition pruning. We still have 2-3 queries that do not utilize partition pruning, and we are working on them; one way to check a given query is sketched below.
Do they support partition-wise joins? --> Most of the queries query a single table. We have changed our queries so that they can utilize the partition key.
work_mem can be used for each node of the plan and if you are getting parallel scans of many tables or indexes where you previously had one, that could be an issue. --> Some of the queries are scanning indexes on all the partitions.
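
For reference, one quick way to check whether a given query benefits from plan-time pruning is to EXPLAIN it with a literal partition-key value; the table and column names below are placeholders only, not the actual schema from this thread:

    -- hypothetical list-partitioned table; substitute the real table and partition key
    EXPLAIN (COSTS OFF)
    SELECT count(*) FROM events WHERE region_id = 42;
    -- with plan-time pruning, only the partition holding region_id = 42 appears in the plan;
    -- with run-time pruning (key known only at execution time), EXPLAIN ANALYZE may instead
    -- report "Subplans Removed: N" on the Append node.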

Current settings:
work_mem = 9MB
cpu_tuple_cost = 0.03
seq_page_cost = 0.7
random_page_cost = 1
huge_pages = off
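
If useful, these values can be confirmed at runtime against the standard pg_settings view, for example:

    SELECT name, setting, unit
    FROM pg_settings
    WHERE name IN ('work_mem', 'cpu_tuple_cost', 'seq_page_cost',
                   'random_page_cost', 'huge_pages', 'max_connections');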

Thanks & Regards,
Ishan Joshi

From: Michael Lewis <mlewis(at)entrata(dot)com>
Sent: Wednesday, June 10, 2020 1:23 AM
To: Ishan Joshi <Ishan(dot)Joshi(at)amdocs(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Postgres server 12.2 crash with process exited abnormally and possibly corrupted shared memory

On Tue, Jun 9, 2020 at 8:35 AM Ishan Joshi <Ishan(dot)Joshi(at)amdocs(dot)com<mailto:Ishan(dot)Joshi(at)amdocs(dot)com>> wrote:
I am using PostgreSQL server v12.2 on CentOS Linux release 7.3.1611 (Core).

My application works fine with non-partitioned tables, but recently we have been trying to adopt partitioned tables for a few of the application tables.
So we have created list partitions on 6 tables: 2 of the 6 tables have 24 partitions and 4 of the 6 tables have 500 list partitions. After partitioning the tables, when we try to run our application, memory utilization climbs to 100%, and once it reaches 100% the Postgres server crashes with the following error
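
For context, list partitioning of this kind is declared along the following lines; the table name and partition key below are placeholders, not the actual schema discussed in this thread:

    -- hypothetical parent table partitioned by list
    CREATE TABLE events (
        event_id   bigint      NOT NULL,
        region_id  int         NOT NULL,
        created_at timestamptz NOT NULL
    ) PARTITION BY LIST (region_id);

    -- one child partition per key value (repeated for each value, e.g. ~500 times)
    CREATE TABLE events_p42 PARTITION OF events FOR VALUES IN (42);
    -- optional catch-all for values with no dedicated partition
    CREATE TABLE events_default PARTITION OF events DEFAULT;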

How many rows did these tables have before partitioning? Why did you decide to partition? Do these list partitions allow for plan-time pruning? Do they support partition-wise joins? work_mem can be used for each node of the plan, and if you are getting parallel scans of many tables or indexes where you previously had one, that could be an issue.
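
To put rough numbers on that point, here is a back-of-the-envelope sketch, assuming (hypothetically) that a query ends up with one work_mem-sized sort or hash node per partition rather than one per table:

    work_mem per node:                    9 MB
    partitions touched without pruning:   500
    potential memory for one such query:  500 x 9 MB  ~= 4.4 GB

and that figure can be multiplied again by the number of backends running such queries concurrently.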

2000 for max_connections strikes me as quite high. Consider the use of a connection pooler like pgbouncer or pgpool such that Postgres can be run with max connections more like 2-5x your number of CPUs, and those connections get re-used as needed. There is some fixed memory overhead for each potential connection.
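
Purely as an illustration of the pooling idea, a minimal pgbouncer setup in transaction mode could look roughly like this; the database name, paths, and pool sizes below are placeholders, not values from this thread:

    ; pgbouncer.ini (sketch)
    [databases]
    appdb = host=127.0.0.1 port=5432 dbname=appdb

    [pgbouncer]
    listen_addr = 0.0.0.0
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction
    max_client_conn = 2000     ; clients can still open many connections to the pooler
    default_pool_size = 40     ; while only a few actual server connections are used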
