Re: use BLOBS or use pointers to files on the filesystem?

From: Jason Earl <jason(dot)earl(at)simplot(dot)com>
To: "Travis Bear" <ec9098(at)www2(dot)us(dot)postgresql(dot)org>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: use BLOBS or use pointers to files on the filesystem?
Date: 2001-12-10 17:40:17
Message-ID: 87g06jc6ta.fsf@npa01zz001.simplot.com
Lists: pgsql-general

"Travis Bear" <ec9098(at)www2(dot)us(dot)postgresql(dot)org> writes:

> Hi
>
> I'm kind of a database newbie in general, so be nice to me if this
> is a bonehead question. I am building a java servlet based web
> application to manage photos and images. My first thought has been
> to store the images as BLOBs in the database, but I read one user
> say that you're better off not using BLOBs. Instead, he says, store
> your large data objects as files on the filesystem, and just use the
> database to store the path to the file and any file-related meta
> data.

Like most programming questions, this one involves a series of
tradeoffs.  It used to be that PostgreSQL had an 8K limit per row (32K
if you edited the source and compiled your own).  PostgreSQL has a
large object interface, but it is even clunkier to use than the
filesystem and almost certainly slower.

Thanks to TOAST, the 8K limit is now essentially gone.  You can store
fairly large files in PostgreSQL quite easily (there is still a hard
limit, but it is much, much larger).  However, the BYTEA type still
needs some work (some characters must be escaped before they can be
inserted), and there is significant overhead involved (large fields
are compressed and then split across several tuples).
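
For what it's worth, if you go the BYTEA route from Java, a
parameterized JDBC insert takes care of the escaping for you.  This is
only a rough sketch under my own assumptions about your schema (an
"images" table with a bytea column, invented connection details), not
anything from your application:

    // Sketch: insert an image file into a BYTEA column using JDBC.
    // Assumes a table like: CREATE TABLE images (name text, data bytea);
    import java.nio.file.*;
    import java.sql.*;

    public class ByteaInsert {
        public static void main(String[] args) throws Exception {
            byte[] image = Files.readAllBytes(Paths.get("photo.jpg"));
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://localhost/photos", "user", "secret");
                 PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO images (name, data) VALUES (?, ?)")) {
                ps.setString(1, "photo.jpg");
                ps.setBytes(2, image);   // the driver handles the bytea escaping
                ps.executeUpdate();
            }
        }
    }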

Filesystems are *really* good at sharing files.  PostgreSQL, being
filesystem-based itself, isn't likely to ever really be competitive
with a plain filesystem.
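
If you do go with plain files, the usual pattern is the one you
described: write the image to disk and keep only the path plus
whatever metadata you care about in a table.  Roughly like this (the
table layout, paths, and connection details are just my guesses):

    // Sketch: store the image on the filesystem and keep a pointer
    // (path) and some metadata in the database.
    // Assumes: CREATE TABLE image_meta (path text, size_bytes bigint,
    //                                   uploaded timestamptz);
    import java.nio.file.*;
    import java.sql.*;

    public class FilePointerInsert {
        public static void main(String[] args) throws Exception {
            Path src = Paths.get("photo.jpg");
            Path dest = Paths.get("/var/photos/photo.jpg");
            Files.copy(src, dest, StandardCopyOption.REPLACE_EXISTING);

            try (Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://localhost/photos", "user", "secret");
                 PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO image_meta (path, size_bytes, uploaded) " +
                     "VALUES (?, ?, now())")) {
                ps.setString(1, dest.toString());
                ps.setLong(2, Files.size(dest));
                ps.executeUpdate();
            }
        }
    }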

> I wasn't sure what to think of this, so I was hoping to get some
> other ideas and opinions. I expect to accumulate 1-2 gigs of new
> data per year, max, with average image size being 700k.

It depends on what you plan to do with these images.  For example, I
am actually storing images (about that size, too) directly in
PostgreSQL.  In fact, I have even bypassed the BYTEA type and am
simply base64 encoding the images and inserting them into a big fat
text field.  However, I am doing this because these images should get
fetched quite rarely, and when one of them does get fetched it is very
likely that it will get fetched again in short order, so I can easily
afford to base64 encode/decode.  PostgreSQL's transactions are useful
to me as well in that they make it easier to ensure that file changes
go as planned.  If an error occurs I simply roll back, and all is
well.  If something goes wrong while writing to a filesystem-based
image, I am left with a corrupted image.
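
In code, that approach looks roughly like the following.  The table
and column names are invented for the example, but the shape is the
same: encode, insert inside a transaction, and roll back if anything
goes wrong.

    // Sketch: base64 encode an image, insert it into a plain text
    // column, and let the transaction keep things consistent.
    // Assumes: CREATE TABLE images (name text, data_b64 text);
    import java.nio.file.*;
    import java.sql.*;
    import java.util.Base64;

    public class Base64Insert {
        public static void main(String[] args) throws Exception {
            byte[] raw = Files.readAllBytes(Paths.get("photo.jpg"));
            String encoded = Base64.getEncoder().encodeToString(raw);

            try (Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://localhost/photos", "user", "secret")) {
                conn.setAutoCommit(false);
                try (PreparedStatement ps = conn.prepareStatement(
                         "INSERT INTO images (name, data_b64) VALUES (?, ?)")) {
                    ps.setString(1, "photo.jpg");
                    ps.setString(2, encoded);
                    ps.executeUpdate();
                    conn.commit();      // all or nothing
                } catch (SQLException e) {
                    conn.rollback();    // on error, no half-written image
                    throw e;
                }
            }
        }
    }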

Most importantly, however, I don't have to change my existing
PostgreSQL backup scripts :).

> Also, if I've missed any obvious documentation on this issue please
> feel free to send it my way.

Take a look at the large object material and the TOAST documentation,
as they represent the two alternatives you have when storing binary
(or merely large) fields in PostgreSQL.
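
If you want to experiment with the large object interface from Java,
the PostgreSQL JDBC driver exposes it through its PGConnection and
LargeObjectManager classes.  Roughly like this (the "images" table and
its oid column are my invention, and note that large objects have to
be used inside a transaction):

    // Sketch: store an image as a PostgreSQL large object via the JDBC
    // driver's LargeObjectManager, keeping the OID in a regular table.
    // Assumes: CREATE TABLE images (name text, image_oid oid);
    import java.nio.file.*;
    import java.sql.*;
    import org.postgresql.PGConnection;
    import org.postgresql.largeobject.LargeObject;
    import org.postgresql.largeobject.LargeObjectManager;

    public class LargeObjectInsert {
        public static void main(String[] args) throws Exception {
            byte[] raw = Files.readAllBytes(Paths.get("photo.jpg"));

            try (Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://localhost/photos", "user", "secret")) {
                conn.setAutoCommit(false);   // large objects need a transaction

                LargeObjectManager lom =
                    ((PGConnection) conn).getLargeObjectAPI();
                long oid = lom.createLO(
                    LargeObjectManager.READ | LargeObjectManager.WRITE);
                LargeObject obj = lom.open(oid, LargeObjectManager.WRITE);
                obj.write(raw);
                obj.close();

                try (PreparedStatement ps = conn.prepareStatement(
                         "INSERT INTO images (name, image_oid) VALUES (?, ?)")) {
                    ps.setString(1, "photo.jpg");
                    ps.setLong(2, oid);
                    ps.executeUpdate();
                }
                conn.commit();
            }
        }
    }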

> Thanks!
>
>
> Travis

I hope this was helpful.

Jason
