Large data field causes a backend crash.

From: pgsql-bugs(at)postgresql(dot)org
To: pgsql-bugs(at)postgresql(dot)org
Subject: Large data field causes a backend crash.
Date: 2001-02-05 20:56:08
Message-ID: 200102052056.f15Ku8n71424@hub.org
Lists: pgsql-bugs

Robert Bruccoleri (bruc(at)stone(dot)congen(dot)com) reports a bug with a severity of 3 (the lower the number, the more severe it is).

Short Description
Large data field causes a backend crash.

Long Description
In testing TOAST in PostgreSQL 7.1beta4, I was curious to see
how large a field could actually be handled. I created a simple table
with one text column, seq, and used the COPY command to load
a value 194,325,306 characters long. It crashed the backend
with the following messages:

test=# copy test from '/stf/bruc/RnD/genscan/foo.test';
TRAP: Too Large Allocation Request("!(0 < (size) && (size) <= ((Size) 0xfffffff)):size=268435456 [0x10000000]", File: "mcxt.c", Line: 478)
!(0 < (size) && (size) <= ((Size) 0xfffffff)) (0) [No such file or directory]
pqReadData() -- backend closed the channel unexpectedly.
This probably means the backend terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Server process (pid 2109589) exited with status 134 at Mon Feb 5 15:20:42 2001
Terminating any active server processes...
The Data Base System is in recovery mode

----------------------------------------------------------------------

I have tried a field of 52,000,000 characters, and that worked
fine (very impressive!).

The system should reject an oversized record gracefully instead of crashing the backend.
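The original ~194 MB input file is not attached, but an equivalent COPY data file can be generated to retry the scenario. A minimal sketch in Python (the path, value length, and fill character are assumptions, not taken from the report):

```python
def write_copy_file(path, length, chunk=1 << 20):
    """Write a COPY-format text file containing a single row with one
    text column whose value is `length` characters long.  Writes in
    `chunk`-sized pieces so the whole value never sits in memory."""
    with open(path, "w") as f:
        written = 0
        while written < length:
            n = min(chunk, length - written)
            f.write("x" * n)
            written += n
        f.write("\n")  # newline terminates the single COPY row

# The report used 194,325,306 characters; a smaller value is used here
# so the sketch runs quickly.
write_copy_file("/tmp/foo.test", 5_000_000)
```

Loading the generated file with `copy test from '/tmp/foo.test';` into a table with one text column should exercise the same code path once the length is pushed past the allocation limit shown in the TRAP message.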

Sample Code

No file was uploaded with this report
