Re: Update statement results in Out of memory

From: Rémi Cura <remi(dot)cura(at)gmail(dot)com>
To: Paul Ramsey <pramsey(at)cleverelephant(dot)ca>
Cc: Ivan Bianchi <ivan(at)wikiloc(dot)com>, PostgreSQL <pgsql-general(at)postgresql(dot)org>
Subject: Re: Update statement results in Out of memory
Date: 2016-07-06 13:42:46
Message-ID: CAJvUf_vLbMXUwrOB=9x59RHw0x30vVYsmKC6V5UzVmVfK+v8hg@mail.gmail.com
Lists: pgsql-general

You could check the max number of points in your geometries:

SELECT max(ST_NumPoints(geom))
FROM ...

Of course you could still have invalid / aberrant geometry,
which you could also check (ST_IsValid, ST_IsSimple).
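For example, a rough check might look like this (the table and column names
are placeholders for your own):

-- Count geometries that are invalid or non-simple
SELECT count(*) FILTER (WHERE NOT ST_IsValid(geom))  AS invalid,
       count(*) FILTER (WHERE NOT ST_IsSimple(geom)) AS non_simple
FROM your_table;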

You could rule out both of those hypotheses by performing your buffer in
batches.
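A batched version could be sketched like this, assuming the table has an
integer primary key `id` (the column name and batch size are illustrative):

-- Buffer one range of rows per transaction to bound memory use
UPDATE your_table
SET buffer = ST_Buffer(simplified_geometry, 0.005)
WHERE id BETWEEN 1 AND 100000;
-- commit, then repeat for the next id range

Committing between batches keeps any per-statement memory growth from
accumulating across the whole 4-million-row update.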

Cheers,
Rémi-C

2016-07-06 15:36 GMT+02:00 Paul Ramsey <pramsey(at)cleverelephant(dot)ca>:

> Running a multi-million row update will take a long time.
> It's possible you've exposed a memory leak in ST_Buffer (the older
> your version of GEOS, the more likely that is) but it's also possible
> you're just running a really long update.
> I find for batch processing purposes that creating fresh tables is far
> preferable:
>
> CREATE TABLE newtable AS SELECT ST_Buffer(geom) ... FROM oldtable;
>
> If you still see memory issues with the above then you probably do
> have a leak, *or* you're just running buffer on a sufficiently large
> input geometry or with a large enough radius to blow up the memory
> naturally.
>
> P
>
>
> On Wed, Jul 6, 2016 at 2:13 AM, Ivan Bianchi <ivan(at)wikiloc(dot)com> wrote:
> > Hello,
> >
> > I am trying to update a column using the PostGIS ST_Buffer function on a
> > table of 4,257,769 rows, but after 6 hours an Out of memory error appears
> > and the kernel starts killing processes until a kernel panic shows up.
> >
> > I have simplified the buffer target geometry and also added a GiST index
> > to that column.
> >
> > The statement is the following:
> >>
> >> psql -h host -U user -W -d database -c "UPDATE table SET buffer =
> >> ST_Buffer(simplified_geometry, 0.005);"
> >
> >
> > After reading and tuning the configuration, I still get the same result.
> >
> > Here's the initial memory stats:
> >
> >>               total    used    free   shared  buff/cache   available
> >> Mem:            15G    1.5G     12G     503M        1.4G         13G
> >> Swap:          7.8G      0B    7.8G
> >
> >
> >
> > I'm running out of ideas, as I think the postgresql.conf memory
> > parameters are quite low for the machine specs. I understand I can split
> > the process and paginate the rows, but I can't see why this single
> > full-table statement can't be handled right now.
> >
> > Do you think this issue is related to the postgres memory parameter
> > configuration? Why is it not respecting the shared_buffers or
> > effective_cache_size parameters, and why does memory keep growing?
> >
> >
> > Here's some info:
> >
> > Machine specs
> >
> > Intel(R) Core(TM) i7-4790K CPU @ 4.00GHz (8 cores)
> > 16 GB of memory
> > Fedora release 23 (Twenty Three)
> > Kernel - 4.5.7-202.fc23.x86_64
> >
> > postgresql.conf
> >
> > effective_cache_size = 5GB
> > shared_buffers = 3GB
> > work_mem = 10MB
> >
> > maintenance_work_mem = 800MB
> > wal_buffers = 16MB
> >
> > Kernel parameters
> >
> > vm.overcommit_memory=2
> >
> > kernel.shmmax = 8340893696
> > kernel.shmall = 2036351
> >
> > Versions:
> >
> > PostgreSQL 9.5.3 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 5.3.1
> > 20160406 (Red Hat 5.3.1-6), 64-bit
> > POSTGIS="2.2.2 r14797" GEOS="3.5.0-CAPI-1.9.0 r4084" PROJ="Rel. 4.9.1, 04
> > March 2015" GDAL="GDAL 2.0.2, released 2016/01/26" LIBXML="2.9.3"
> > LIBJSON="0.12" RASTER
> >
> >
> > Many thanks,
> >
> > --
> > Ivan
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general(at)postgresql(dot)org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
