From: "Moises Alberto Lindo Gutarra" <mlindo(at)gmail(dot)com>
To: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Nik <XLPizza(at)gmail(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: Out of memory error on pg_restore
Date: 2006-03-08 21:10:16
Message-ID: 5db591c00603081310s339e7f0bh@mail.gmail.com
Lists: pgsql-general
Another option is to set larger values under
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management,
but restoring a large amount of data on Windows still takes a very long time.
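As a rough illustration of adjusting that key, the values can be inspected and changed with the standard `reg` tool from an elevated prompt. This is only a sketch: the value shown (`PagedPoolSize`) is a real Memory Management value, but the number used here is a placeholder, not a recommendation, and a reboot is needed for it to take effect.

```shell
:: Inspect the current Memory Management settings (Windows, elevated prompt).
reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

:: Example only: raise the paged pool size (0 = let Windows choose;
:: any explicit value here is a placeholder, tune with care and reboot after).
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" ^
    /v PagedPoolSize /t REG_DWORD /d 0 /f
```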
2006/3/8, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>:
> "Nik" <XLPizza(at)gmail(dot)com> writes:
> > pg_restore: ERROR: out of memory
> > DETAIL: Failed on request of size 32.
> > CONTEXT: COPY lane_data, line 17345022: "<line of data goes here>"
>
> A COPY command by itself shouldn't eat memory. I'm wondering if the
> table being copied into has any AFTER triggers on it (eg for foreign key
> checks), as each pending trigger event uses memory and so a copy of a
> lot of rows could run out.
>
> pg_dump scripts ordinarily load data before creating triggers or foreign
> keys in order to avoid this problem. Perhaps you were trying a
> data-only restore? If so, best answer is "don't do that". A plain
> combined schema+data dump should work.
>
> regards, tom lane
>
> ---------------------------(end of broadcast)---------------------------
> TIP 2: Don't 'kill -9' the postmaster
>
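Tom's advice above can be sketched as command lines (the database and file names here are placeholders, not from the original thread). A combined schema+data dump loads the data before triggers and foreign keys exist, so no trigger events pile up in memory; if a data-only restore is truly unavoidable, pg_restore's --disable-triggers option (superuser required) sidesteps the same problem:

```shell
# Preferred: combined schema+data dump in custom format, then one restore.
# "mydb", "newdb", and "mydb.dump" are placeholder names.
pg_dump -Fc -f mydb.dump mydb
pg_restore -d newdb mydb.dump

# Fallback for an unavoidable data-only restore: disable triggers
# during the load so pending AFTER-trigger events don't accumulate.
pg_restore --data-only --disable-triggers -d newdb mydb.dump
```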