large INSERT leads to "invalid memory alloc"

From: Stefan Froehlich <postgresql(at)froehlich(dot)priv(dot)at>
To: pgsql-general(at)postgresql(dot)org
Subject: large INSERT leads to "invalid memory alloc"
Date: 2012-11-25 17:19:04
Message-ID: 20121125171904.GA4226@89-149-202-102.internetserviceteam.com
Lists: pgsql-general

While converting a mysql database into postgres, I stumbled over the
following problem:

| INSERT INTO a (id, fkid, displayorder, name, description, internal, mimetype, mimedata, filesize) VALUES (73,6,5, E'Seefeld.rar', E'Seefeld',0, E'application/octet-stream',decode('5261...0700', 'hex'),311484587);

As the value for "filesize" suggests, this is a very large BYTEA
(formerly: LONGBLOB) entry of about 300 MB. That is atypical; all the
other values are at most a couple of MB and cause no problems.
This particular line fails with:

| sfroehli(at)host:~$ psql dbname < statement.sql
| Password:
| ERROR: invalid memory alloc request size 1073741824

I have not found any configuration directive similar to MySQL's
"max_allowed_packet" for increasing the buffer size. Besides, I
don't understand why Postgres wants to allocate 1 GB to store
300 MB (which takes up 600 MB of ASCII text inside the decode() string).
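For reference, a back-of-the-envelope check of the sizes involved (assuming, as I understand it, that PostgreSQL refuses any single memory allocation of 1 GB or more, which would explain the exact number in the error message):

```python
# Rough size arithmetic for the failing INSERT.
# Assumption: PostgreSQL rejects single allocations >= 1 GB (1073741824 bytes).

binary_size = 311484587          # bytes of binary data (the "filesize" column)
hex_size = 2 * binary_size       # two hex digits per byte in the decode() literal

print(binary_size / 2**20)       # ~297 MiB of binary data
print(hex_size / 2**20)          # ~594 MiB of hex text in the statement
print(2**30)                     # 1073741824 -- the size from the error message
```

So neither the binary value nor the hex text alone reaches 1 GB, yet the server apparently ends up asking for exactly 2^30 bytes somewhere while processing the statement.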

Any idea how to put this into the target database?

Stefan
