From: Stefan Froehlich <postgresql(at)froehlich(dot)priv(dot)at>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: large INSERT leads to "invalid memory alloc"
Date: 2012-11-25 19:39:45
Message-ID: 20121125193945.GA4894@89-149-202-102.internetserviceteam.com
Lists: pgsql-general
On Sun, Nov 25, 2012 at 06:57:22PM +0100, Pavel Stehule wrote:
> > | INSERT INTO a (id, fkid, displayorder, name, description, internal, mimetype, mimedata, filesize) VALUES (73,6,5, E'Seefeld.rar', E'Seefeld',0, E'application/octet-stream',decode('5261...0700', 'hex'),311484587);
> Attention: BYTEA is not BLOB, and although the physical limit is 1 GB,
> the real limit is significantly lower and depends on RAM. Seven years
> ago we found that the practical limit is about 20 MB.
Oops, that's not much. The docs mention a 4-byte length descriptor, so I
expected a size limit of 4 GB and felt quite safe with a maximum size of
300 MB. (Presumably the hex-encoded literal alone doubles that to some
600 MB of query text, and the parser and decode() hold several copies at
once, which would explain hitting the 1 GB allocation limit much earlier.)
> If you need more, use blobs instead, or you can divide the value into
> smaller blocks:
> http://www.fuzzy.cz/en/articles/storing-files-in-a-postgresql-database/
Yes, storing large data objects in the file system is advisable for
several reasons; we had the same discussion with MySQL a while ago. But
the decision was made to keep the data in the database (and there is
only one object of this size anyway). Rewriting the framework is not an
option at the moment.
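
For the archives: if the schema could be touched after all, the blob
route Pavel mentions would presumably look something like this (the oid
column name and the file path are invented, and lo_import() reads a
server-local path, which may require superuser rights):

  -- Hypothetical: reference a large object by oid instead of storing
  -- the data in the bytea column.
  ALTER TABLE a ADD COLUMN mimedata_oid oid;

  -- lo_import() runs in the server process, reads the given file and
  -- returns the oid of the newly created large object.
  UPDATE a
     SET mimedata_oid = lo_import('/tmp/Seefeld.rar')
   WHERE id = 73;

Large objects are then read and written page-wise (lo_open/loread), so
no single allocation ever has to hold the whole value.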
If I fail to migrate this to PostgreSQL, we'd rather cancel the
transition.
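
And for completeness, the chunking workaround from the linked article
would roughly amount to a side table like the following; the table and
column names and the block layout are just my sketch:

  -- Illustrative side table holding the value in blocks of a few MB:
  CREATE TABLE a_mimedata_chunks (
      a_id     integer NOT NULL REFERENCES a (id),
      chunk_no integer NOT NULL,
      chunk    bytea   NOT NULL,
      PRIMARY KEY (a_id, chunk_no)
  );

  -- Each INSERT then carries only one block, so neither the query text
  -- nor the decode() result gets near the allocation limit:
  INSERT INTO a_mimedata_chunks (a_id, chunk_no, chunk)
  VALUES (73, 0, decode('5261...', 'hex'));

  -- Reassembly on read:
  SELECT string_agg(chunk, ''::bytea ORDER BY chunk_no)
    FROM a_mimedata_chunks
   WHERE a_id = 73;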
Stefan