From: Bruno Wolff III <bruno(at)wolff(dot)to>
To: Dan Boitnott <dan(at)mcneese(dot)edu>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Large Objects
Date: 2004-12-31 06:21:40
Message-ID: 20041231062140.GC17555@wolff.to
Lists: pgsql-general
On Mon, Dec 27, 2004 at 10:39:48 -0600,
Dan Boitnott <dan(at)mcneese(dot)edu> wrote:
> I need to do some investigation into the way Postgres handles large
> objects for a major project involving large objects. My questions are:
I don't know the answer to all of your questions.
> * Is it practical/desirable to store files MIME-Encoded inside a
> text field?
This should be possible as long as the files aren't too large. bytea is
another type that might be a better fit, since it stores raw binary data
without the encoding step.
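For illustration, here is a minimal client-side Python sketch of the two options being compared (the table names and any driver details are omitted; this only shows what the stored values would look like):

```python
import base64

# Simulated file contents (in practice, read from disk).
file_bytes = b"\x89PNG\r\n\x1a\n" + bytes(range(256))

# Option 1: MIME/base64-encode the file for storage in a text column.
text_value = base64.b64encode(file_bytes).decode("ascii")

# Option 2: store the raw bytes in a bytea column -- no encoding step;
# the driver passes the bytes through directly.
bytea_value = file_bytes

# base64 costs roughly 33% extra space before any compression.
print(len(bytea_value), len(text_value))
```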
> * The obvious disadvantages:
> * slow, Slow, SLOW
If you always need to access the whole file, this might not be too bad.
But if you only need a small part, you are going to pay a big cost, as
the whole record will need to be retrieved before you can pick out the
part you want.
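The partial-access cost can be sketched in Python (client-side only; this mirrors the idea that a MIME-encoded text value is generally retrieved and decoded in full before you can slice it, whereas raw bytes can be sliced directly):

```python
import base64

file_bytes = bytes(range(256)) * 64  # a 16 KiB payload
encoded = base64.b64encode(file_bytes).decode("ascii")

# Raw bytes (bytea-style): grab a small slice directly.
chunk_raw = file_bytes[1000:1010]

# base64 text: decode the whole value, then slice the part you want.
chunk_from_text = base64.b64decode(encoded)[1000:1010]

assert chunk_raw == chunk_from_text
```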
> * significant increase in per-file storage requirements
It might not be too bad, as large records can be compressed. That should
get back some of the bloat from the MIME (base64) encoding.
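The overhead-versus-compression trade-off can be sketched with Python's stdlib; zlib here merely stands in for the server-side compression of large values, as a rough illustration rather than an exact model:

```python
import base64
import zlib

file_bytes = b"example payload " * 1024  # 16 KiB of compressible data
encoded = base64.b64encode(file_bytes)

# base64 adds roughly 33% overhead over the raw bytes...
print(len(file_bytes), len(encoded))

# ...but compressing the encoded value claws much of that back.
compressed = zlib.compress(encoded)
print(len(compressed))
```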