From: Neil Conway <nconway(at)klamath(dot)dyndns(dot)org>
To: PostgreSQL general mailing list <pgsql-general(at)postgresql(dot)org>
Subject: Re: Re: Large Objects
Date: 2000-09-21 01:14:26
Message-ID: 20000920211426.A274@klamath.dyndns.org
Lists: pgsql-general
On Wed, Sep 20, 2000 at 03:45:39PM -0700, Tim Kientzle wrote:
> Steven Lacroix asks:
> > ... what kind of performance hits do BLOBS have on a database ...
> > Note that it would be for web database project.
>
> I haven't tried this with PostgreSQL, but I ran some experiments
> to compare the speed of access for large numbers of BLOBs stored
> in a single MySQL table and in a directory in the local filesystem.
>
> For more than 10,000 BLOBs, the database was a clear winner.
> That is, it took less time to locate and read an 8k-16k BLOB
> from the MySQL database than from the local disk. For smaller
> numbers of BLOBs, the filesystem was faster. This is pretty
> much what you should expect: reading a file through a system
> call should be faster than doing a network operation to request
> a BLOB. Conversely, Unix filesystems store directories as unsorted
> lists, which are a lot slower to search than the database's
> structured indexes.
Wow, can anyone confirm this (preferably with Postgres)? When I talked
with some developers at my old job, they all agreed that storing large
pieces of data (1k < x < 16K) was significantly faster on the FS than
in the DB. They were running Postgres 6.5 w/ JDBC on Solaris. I didn't
ask them what gave them this idea, but are they incorrect?
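
For reference, the kind of head-to-head test I have in mind would look
roughly like this (just a sketch, not what Tim actually ran -- the Python
psycopg2 driver, the "blobs" table with a bytea column, and the "blobdir"
directory of files named by id are all placeholders of my own):

    import os, random, time
    import psycopg2

    conn = psycopg2.connect("dbname=test")
    cur = conn.cursor()

    ids = random.sample(range(10000), 1000)

    # time reads from a directory of files named by id
    t0 = time.time()
    for i in ids:
        with open(os.path.join("blobdir", str(i)), "rb") as f:
            f.read()
    fs_secs = time.time() - t0

    # time the same lookups against a table: blobs(id int primary key, data bytea)
    t0 = time.time()
    for i in ids:
        cur.execute("SELECT data FROM blobs WHERE id = %s", (i,))
        cur.fetchone()
    db_secs = time.time() - t0

    print("filesystem: %.3fs   database: %.3fs" % (fs_secs, db_secs))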
The project I'm working on currently needs to store a large number of
relatively small pieces of data (from 256 chars up to a rarely-reached
maximum of ~64K, averaging ~1-2K). Should this data be stored as an LO,
or as a file on the FS? There could be up to 1 or 2 million of these
objects (although something around 100,000 would be more common).
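
To make the LO option concrete, this is roughly the access pattern I'm
picturing (again only a sketch, using psycopg2's large-object wrapper;
the connection string and payload are made up):

    import psycopg2

    conn = psycopg2.connect("dbname=test")

    # store: create a new large object and write the data into it
    lob = conn.lobject(0, "wb")      # oid=0 lets the server assign a new OID
    oid = lob.oid                    # keep this OID in a regular table somewhere
    lob.write(b"somewhere between 256 bytes and ~64K of data")
    lob.close()
    conn.commit()                    # LO access has to happen inside a transaction

    # fetch: open the object by its OID and read it back
    lob = conn.lobject(oid, "rb")
    data = lob.read()
    lob.close()
    conn.commit()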
Also, will this change with 7.1? I've heard the 8K row-length limit is
going away, so I suppose I could store this data directly in the
DB (as a text column, I guess). At that point, would this be the
best choice?
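
If the limit really does go away, the text-column route would presumably
be as simple as something like this (sketch only; the table and column
names are invented):

    import psycopg2

    conn = psycopg2.connect("dbname=test")
    cur = conn.cursor()
    cur.execute("CREATE TABLE docs (id serial PRIMARY KEY, body text)")
    cur.execute("INSERT INTO docs (body) VALUES (%s) RETURNING id",
                ("a few hundred bytes to ~64K of text",))
    doc_id = cur.fetchone()[0]
    cur.execute("SELECT body FROM docs WHERE id = %s", (doc_id,))
    body = cur.fetchone()[0]
    conn.commit()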
Thanks in advance,
Neil
--
Neil Conway <neilconway(at)home(dot)com>
Get my GnuPG key from: http://klamath.dyndns.org/mykey.asc
Encrypted mail welcomed
They that give up liberty to obtain a little temporary security deserve
neither liberty nor safety.
-- Benjamin Franklin