From: Francisco Reyes <lists(at)stringsutils(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Vivek Khera <vivek(at)khera(dot)org>, PostgreSQL general <pgsql-general(at)postgresql(dot)org>
Subject: Re: pg_restore out of memory
Date: 2007-06-18 19:12:25
Message-ID: cone.1182193945.831106.81364.5001@35st.simplicato.com
Lists: pgsql-general
Tom Lane writes:
> Keep in mind though that the COPY process is going to involve several
> working copies of that data (at least four that I can think of ---
> line input buffer, field input buffer, constructed text object, and
> constructed tuple).
Do those working copies come out of the shared_buffers memory?
> I'm also not clear on whether the 512MB limit you refer to will count
> the PG shared memory area
The OS limit is set to 1.6GB.
I increased the shared_buffers to 450MB and it still failed.
> hundred meg off the top of what a backend can allocate as temporary
> workspace.
Is there anything I can change in my log settings to produce output that would
help you narrow down this problem?
> So it seems entirely likely to me that you'd need a ulimit above 512MB
> to push around 84MB fields.
The issue I am trying to figure out is which limit. The OS limit is set to
1.6GB. I am now trying to increase shared_buffers; so far I have it at 450MB
and the restore is still failing.
I will also try the setting Vivek suggested, although that may require
restarting the machine.
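For reference, one way to see which per-process limits a backend started from
this environment would actually inherit (a rough sketch assuming a POSIX
shell; the option letters and units vary by OS, so check your platform's
ulimit documentation):

```shell
#!/bin/sh
# Show all resource limits for the current shell; a postgres backend
# launched from here inherits these unless the init script changes them.
ulimit -a

# Data segment size limit ("datasize"); on many systems this is reported
# in kilobytes, so a 1.6GB limit would appear as roughly 1677721.
ulimit -d

# Virtual memory limit, where the -v option is supported (prints an
# error on some BSDs, hence the fallback).
ulimit -v 2>/dev/null || echo "ulimit -v not supported on this system"
```

Since shared_buffers is shared memory and counts separately from a backend's
private allocations on most platforms, the datasize limit is usually the one
that matters for large COPY fields.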