Re: My Experiment of PG crash when dealing with huge amount of data

From: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>
To: 高健 <luckyjackgao(at)gmail(dot)com>
Cc: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: My Experiment of PG crash when dealing with huge amount of data
Date: 2013-08-30 13:07:26
Message-ID: CAB7nPqSuLujfaF6rNHDSqiA8b_rr0reSi_mS+PenS6ueS1emvw@mail.gmail.com
Lists: pgsql-general

On Fri, Aug 30, 2013 at 6:10 PM, 高健 <luckyjackgao(at)gmail(dot)com> wrote:
> In log, I can see the following:
> LOG: background writer process (PID 3221) was terminated by signal 9:
> Killed
Assuming that no user on your server killed this process manually and
that no maintenance task you set up did so, this looks like the Linux
OOM killer kicking in because of memory overcommit. Have a look here
for more details:
http://www.postgresql.org/docs/current/static/kernel-resources.html#LINUX-MEMORY-OVERCOMMIT
So have a look at dmesg to confirm that, then you can use one of the
strategies described in the docs.
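For example (just a sketch; the exact kernel message wording varies
with the kernel version, and disabling overcommit is only one of the
strategies from the docs):

    # check for OOM killer activity around the crash time
    dmesg | grep -i -E 'out of memory|oom|killed process'

    # disable memory overcommit (run as root; put vm.overcommit_memory=2
    # in /etc/sysctl.conf to make it persistent across reboots)
    sysctl -w vm.overcommit_memory=2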
Also, since you have been doing a bulk INSERT, you should temporarily
increase checkpoint_segments to reduce the pressure on the background
writer by reducing the number of checkpoints happening. This will also
make your data load faster.
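Something like this, for instance (64 is only an illustrative value,
pick it based on the size of your load and the WAL space you can afford
in pg_xlog, and adjust the data directory path to yours):

    # in postgresql.conf
    checkpoint_segments = 64    # default is 3, bump it for the bulk load

    # checkpoint_segments only needs a reload, not a restart
    pg_ctl reload -D /path/to/data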
--
Michael
