From: David Rowley <david(dot)rowley(at)2ndquadrant(dot)com>
To: Andy Colson <andy(at)squeakycode(dot)net>
Cc: pgsql <pgsql-general(at)postgresql(dot)org>
Subject: Re: pg_dump out of memory
Date: 2018-07-04 05:31:29
Message-ID: CAKJS1f-3TpWoUWKVzY=xVuFk269oPD7NAyuBarpZY8Hvz4KwZQ@mail.gmail.com
Lists: pgsql-general
On 4 July 2018 at 14:43, Andy Colson <andy(at)squeakycode(dot)net> wrote:
> I moved a physical box to a VM, and set its memory to 1Gig. Everything
> runs fine except one backup:
>
>
> /pub/backup# pg_dump -Fc -U postgres -f wildfire.backup wildfire
>
> pg_dump: Dumping the contents of table "ofrrds" failed: PQgetResult() failed.
> pg_dump: Error message from server: ERROR: out of memory
> DETAIL: Failed on request of size 1073741823.
> pg_dump: The command was: COPY public.ofrrds (id, updateddate, bytes) TO
> stdout;
There will be less memory pressure on the server if pg_dump is run from
another host. When pg_dump runs locally, the 290MB bytea value is
allocated twice on the same machine: once in the backend process serving
pg_dump and once in pg_dump itself. Running the backup remotely means the
second copy lives on the client rather than on the server.
> I've been reducing my memory settings:
>
> maintenance_work_mem = 80MB
> work_mem = 5MB
> shared_buffers = 200MB
You may also get it to work by reducing shared_buffers further. work_mem
won't have any effect here, and neither will maintenance_work_mem; those
settings cap sort/hash and maintenance operations, not the single large
allocation COPY needs for this row. Failing that, the suggestions of more
RAM and/or swap look good.
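
As a rough sketch of both ideas (all sizes below are illustrative
assumptions, not values recommended in this thread):

    # postgresql.conf on the VM: shrink the shared buffer pool so more of the
    # 1GB of RAM stays free for large per-request allocations (example value).
    shared_buffers = 128MB

    # Or add a temporary swap file so the occasional huge allocation can page
    # out (example size; run as root on the VM).
    fallocate -l 1G /swapfile
    chmod 600 /swapfile
    mkswap /swapfile
    swapon /swapfile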
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services