Re: Bad planning data resulting in OOM killing of postgres

From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: David Hinkle <hinkle(at)cipafilter(dot)com>
Cc: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Bad planning data resulting in OOM killing of postgres
Date: 2017-02-13 20:41:19
Message-ID: CAMkU=1xS=HTh0msPkLarYD4aNfL0t+KeuE18dvR6AG8k32mE+A@mail.gmail.com
Lists: pgsql-general

On Mon, Feb 13, 2017 at 11:53 AM, David Hinkle <hinkle(at)cipafilter(dot)com>
wrote:

> Thanks guys, here's the information you requested:
>
> psql:postgres(at)cipafilter = show work_mem;
> work_mem
> ──────────
> 10MB
> (1 row)
>

OK, new theory then. Do you have triggers on, or foreign key constraints
referencing, the table you are deleting from? PostgreSQL queues up each
deleted row in memory so it can go back and fire the trigger or validate the
constraint at the end of the statement. You might need to drop the
constraint, or delete in smaller batches by adding some kind of dummy
condition to the WHERE clause which you progressively move.
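A minimal sketch of the batched approach, using a made-up table and column
names (the window column should ideally be indexed):

```sql
-- Hypothetical example: instead of one huge DELETE, walk an id window so
-- each statement's trigger/constraint queue stays small.
DELETE FROM big_table
WHERE created < '2016-01-01'
  AND id >= 0 AND id < 100000;

-- Then repeat, moving the dummy id window forward each time:
DELETE FROM big_table
WHERE created < '2016-01-01'
  AND id >= 100000 AND id < 200000;
-- ...and so on until the whole range is covered.
```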

Or select the rows you want to keep into a new table, then drop the old one,
rename the new one, and rebuild any constraints, indexes, and other
dependencies. This can be pretty annoying if there are a lot of them.
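The copy-and-swap idea might look something like this (table and column
names are illustrative; LIKE ... INCLUDING ALL copies column definitions,
indexes, and most constraints, but foreign keys, triggers, and grants must
be recreated by hand):

```sql
BEGIN;
-- Build a replacement table with the same columns and indexes.
CREATE TABLE big_table_new (LIKE big_table INCLUDING ALL);

-- Copy only the rows you want to keep.
INSERT INTO big_table_new
SELECT * FROM big_table WHERE created >= '2016-01-01';

-- Swap the tables; CASCADE also drops dependent objects such as views.
DROP TABLE big_table CASCADE;
ALTER TABLE big_table_new RENAME TO big_table;

-- Recreate any foreign keys, triggers, views, and grants dropped above.
COMMIT;
```

Holding this in one transaction keeps readers from seeing the table vanish,
at the cost of an exclusive lock for the duration of the copy.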

Cheers,

Jeff
