From: Peter Erickson <redlamb(at)redlamb(dot)net>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Cannot allocate memory for output buffer
Date: 2009-11-27 22:55:30
Message-ID: 4B1058E2.4000002@redlamb.net
Lists: pgsql-general
Thanks. Out of curiosity, if memory exhaustion was the problem, any idea
why the task manager would show that I'm only using 1.2GB of the 3GB of
memory?
On 11/27/2009 5:15 PM, Tom Lane wrote:
> Pete Erickson <redlamb(at)redlamb(dot)net> writes:
>> I am looking for some help regarding a Python OperationalError that I
>> recently received while executing a python script using sqlalchemy and
>> psycopg2. The python script parses an xml file stored on a networked
>> drive and enters the information into a pgsql database. Sometimes
>> these xml files reference a binary file which is also located on the
>> networked drive. These files are subsequently read in and stored in a
>> table along with the file's md5. The binary data is stored within a
>> bytea column. This script worked pretty well until recently when it
>> came across a binary file about 258MB in size. While reading the file
>> off the networked drive I received an OperationalError indicating that
>> it was unable to allocate memory for the output buffer. My initial
>> guess was that it ran out of memory, but according to the task manager
>> the machine had close to 2GB free when the error occurred.
>
> Out of memory is probably exactly right. The textual representation of
> arbitrary bytea data is normally several times the size of the raw bits
> (worst case is 5x bigger, typical case perhaps half that). In addition
> to that you have to consider that there are likely to be several copies
> of the string floating around in your process' memory space. If you're
> doing this in a 32bit environment it doesn't surprise me at all that
> 258MB of raw data would exhaust available memory.
>
> Going to a 64bit implementation would help some, but I'm not sure that
> that's an available option for you on Windows, and anyway it doesn't
> eliminate the problem completely. If you want to process really large
> binary files you're going to need to divide them into segments.
>
> regards, tom lane
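
Two things make the numbers here add up. First, Task Manager reports machine-wide memory use, but a 32-bit Windows process is normally capped at 2GB of user address space no matter how much RAM is free, so a process can exhaust its own space while the machine looks mostly idle. Second, Tom's 5x worst case comes from the old 'escape' bytea output format: a non-printable byte becomes \ooo, and inside a quoted SQL literal the backslash doubles, giving five output characters per input byte. A back-of-the-envelope sketch of that expansion (a rough illustration, not psycopg2's actual adapter):

    # Estimate the textual size of 'escape'-format bytea embedded in a
    # quoted SQL literal (the pre-9.0 default representation).
    import os

    def escaped_len(data):
        n = 0
        for b in data:
            if b == 0x5c:              # backslash -> \\\\ (4 chars in a literal)
                n += 4
            elif b == 0x27:            # single quote -> '' (2 chars)
                n += 2
            elif 0x20 <= b <= 0x7e:    # other printable ASCII passes through
                n += 1
            else:                      # everything else -> \\ooo (5 chars)
                n += 5
        return n

    sample = os.urandom(1000000)                     # 1 MB of random bytes
    print(escaped_len(sample) / float(len(sample)))  # ~3.5x for random data

At roughly 3.5x, 258MB of raw data becomes about 900MB of text, and with several copies of that string alive at once a 2GB address space is quickly gone.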
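And a minimal sketch of the segmenting Tom suggests: store each file as numbered bytea rows, so no single buffer ever holds more than one chunk. The schema and names here (file_chunks, file_id, seq, data) are hypothetical, not from the thread:

    # Segment-wise storage with psycopg2, one row per chunk.
    import hashlib
    import psycopg2

    CHUNK_SIZE = 8 * 1024 * 1024   # 8 MB per row keeps every buffer small

    def store_file(conn, file_id, path):
        md5 = hashlib.md5()
        cur = conn.cursor()
        with open(path, 'rb') as f:
            seq = 0
            while True:
                chunk = f.read(CHUNK_SIZE)
                if not chunk:
                    break
                md5.update(chunk)       # same running MD5 as before
                cur.execute(
                    "INSERT INTO file_chunks (file_id, seq, data) "
                    "VALUES (%s, %s, %s)",
                    (file_id, seq, psycopg2.Binary(chunk)))
                seq += 1
        conn.commit()
        return md5.hexdigest()

Reassembly is then a SELECT ... ORDER BY seq. psycopg2's large-object support (connection.lobject()) is another possible route, though large objects live outside ordinary tables.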