From: Tim Kientzle <kientzle(at)acm(dot)org>
To: PostgreSQL general mailing list <pgsql-general(at)postgresql(dot)org>
Subject: Using BLOBs with PostgreSQL
Date: 2000-10-07 22:52:34
Message-ID: 39DFA932.31834C8D@acm.org
Lists: pgsql-general pgsql-hackers

I'm evaluating a couple of different databases for use as the
back-end to a web-based publishing system that's currently being
developed in Java and Perl.

I want to keep _all_ of the data in the database, to simplify future
replication and data management. That includes data such as GIF
images, large HTML files, and even multi-megabyte downloadable
software archives.
I've been using MySQL for initial development; it has pretty clean
and easy-to-use BLOB support. You just declare a BLOB column type,
then read and write arbitrarily large chunks of data. In Perl, BLOB
columns work just like varchar columns; in JDBC, the
getBinaryStream()/setBinaryStream() methods provide streaming access
to large data objects.
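
For concreteness, here's a minimal sketch of the JDBC pattern I
mean; the documents table and its columns are just examples, but the
setBinaryStream()/getBinaryStream() calls are standard JDBC:

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class BlobDemo {
        // Store a file into a BLOB column, streaming it rather than
        // buffering the whole thing in memory.
        static void store(Connection conn, String name, File f)
                throws Exception {
            try (InputStream in = new FileInputStream(f);
                 PreparedStatement ps = conn.prepareStatement(
                         "INSERT INTO documents (name, body) VALUES (?, ?)")) {
                ps.setString(1, name);
                ps.setBinaryStream(2, in, (int) f.length());
                ps.executeUpdate();
            }
        }

        // Read the BLOB back, copying it to the given output stream.
        static void load(Connection conn, String name, OutputStream out)
                throws Exception {
            try (PreparedStatement ps = conn.prepareStatement(
                         "SELECT body FROM documents WHERE name = ?")) {
                ps.setString(1, name);
                try (ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) {
                        try (InputStream in = rs.getBinaryStream("body")) {
                            byte[] buf = new byte[8192];
                            for (int n; (n = in.read(buf)) > 0; ) {
                                out.write(buf, 0, n);
                            }
                        }
                    }
                }
            }
        }
    }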
How well-supported is this functionality in PostgreSQL?

I did some early experimenting with PG, but couldn't find any column
type that would accept binary data (apparently PG's parser chokes on
embedded null characters?).
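
To make that concrete, here's roughly the shape of the problem and
of the workaround I'd hope for: building a SQL literal out of raw
bytes dies at the first NUL, while binding the bytes as a parameter
keeps them out of the statement text entirely. (A sketch only,
assuming a bytea column; the images table is made up.)

    import java.sql.Connection;
    import java.sql.PreparedStatement;

    public class ByteaDemo {
        static void insertImage(Connection conn, byte[] rawBytes)
                throws Exception {
            // Splicing rawBytes into the statement string is what
            // breaks: an embedded NUL truncates or garbles the query
            // text. Binding it as a parameter sidesteps the parser.
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO images (name, data) VALUES (?, ?)")) {
                ps.setString(1, "logo");
                ps.setBytes(2, rawBytes); // raw byte[], NULs and all
                ps.executeUpdate();
            }
        }
    }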
I've heard about TOAST, but have no idea what it really
is, how to use it, or how well it performs. I'm leery
of database-specific APIs.
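
The database-specific route I'm trying to avoid looks roughly like
this, going by the pgjdbc driver's large-object extension API (a
sketch, untested; the archives table is made up, and its body column
would be of type oid):

    import java.sql.Connection;
    import java.sql.PreparedStatement;

    import org.postgresql.PGConnection;
    import org.postgresql.largeobject.LargeObject;
    import org.postgresql.largeobject.LargeObjectManager;

    public class LargeObjectDemo {
        static void store(Connection conn, String name, byte[] data)
                throws Exception {
            conn.setAutoCommit(false); // LO calls need a transaction
            LargeObjectManager lom =
                    conn.unwrap(PGConnection.class).getLargeObjectAPI();

            // Create a new large object and write the payload into it.
            long oid = lom.createLO(LargeObjectManager.READ
                    | LargeObjectManager.WRITE);
            LargeObject obj = lom.open(oid, LargeObjectManager.WRITE);
            try {
                obj.write(data, 0, data.length);
            } finally {
                obj.close();
            }

            // The row stores only the OID; the bytes themselves live
            // in the pg_largeobject system table.
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO archives (name, body) VALUES (?, ?)")) {
                ps.setString(1, name);
                ps.setLong(2, oid);
                ps.executeUpdate();
            }
            conn.commit();
        }
    }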
- Tim Kientzle