From:       Guy Helmer <ghelmer(at)palisadesys(dot)com>
To:         Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc:         pgsql-general(at)postgresql(dot)org
Subject:    Re: Invalid memory alloc request
Date:       2009-08-25 16:52:05
Message-ID: 4A9416B5.8080603@palisadesys.com
Lists:      pgsql-general
Tom Lane wrote:
> Guy Helmer <ghelmer(at)palisadesys(dot)com> writes:
>
>> Tom Lane wrote:
>>
>>> Normally I'd say "data corruption", but it is odd if you got the
>>> identical message from two different machines. Can you reproduce
>>> it with a debugger attached? If so, a backtrace from the call of
>>> errfinish might be useful.
>>>
>
>
>> Yes, here is the backtrace.
>>
>
> Well, that looks just about like you'd expect for a bytea column.
>
> Hmm ... you mentioned 500MB total in the textdata column. Is it
> possible that that's nearly all in one entry? It's conceivable
> that the text representation of the entry is simply too large.
> (The next question of course would be how you got the entry in there,
> but maybe it was submitted in binary protocol, or built by
> concatenation.)
>
> regards, tom lane
>
On the system where I captured the backtrace, there are several
400MB entries in the textdata column. I inserted these entries with an
"INSERT INTO ... (..., textdata) VALUES (..., $1)" statement: I mmap'ed the
data from a file into memory and executed the command using PQexecParams().
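For reference, the insert is roughly like the sketch below (table and
column names are placeholders, connection setup is omitted, and error
handling is trimmed):

/* Simplified sketch: insert an mmap'ed file as one binary bytea parameter.
 * "mytable" and insert_file_as_bytea() are placeholder names. */
#include <libpq-fe.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

static int insert_file_as_bytea(PGconn *conn, const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;

    struct stat st;
    if (fstat(fd, &st) != 0) {
        close(fd);
        return -1;
    }

    /* Map the file so the whole contents can be passed as one parameter. */
    void *data = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (data == MAP_FAILED) {
        close(fd);
        return -1;
    }

    const char *paramValues[1] = { data };
    int paramLengths[1] = { (int) st.st_size };
    int paramFormats[1] = { 1 };        /* 1 = binary format for $1 */

    PGresult *res = PQexecParams(conn,
                                 "INSERT INTO mytable (textdata) VALUES ($1)",
                                 1,      /* one parameter */
                                 NULL,   /* let the server infer the type */
                                 paramValues,
                                 paramLengths,
                                 paramFormats,
                                 0);     /* text result format */

    int ok = (PQresultStatus(res) == PGRES_COMMAND_OK) ? 0 : -1;
    if (ok != 0)
        fprintf(stderr, "INSERT failed: %s", PQerrorMessage(conn));

    PQclear(res);
    munmap(data, st.st_size);
    close(fd);
    return ok;
}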
Is there a quantifiable limit on the size of values I can insert into a
bytea column? I haven't found a limit documented anywhere...
Thanks,
Guy
--