From: Jeremy Andrus <jeremy(at)jeremya(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: pgsql BLOB issues
Date: 2003-04-28 02:30:23
Message-ID: 200304272230.23583@jeremy-rokks
Lists: pgsql-performance
Hello,
I have a database that contains a large number of Large Objects
(>500MB in total). I am using this database to store images for an
e-commerce website, so I have a simple accessor script written in Perl
that dumps out a blob based on a virtual 'path' stored in a table (and
associated with the large object's OID). This system seemed to work
wonderfully until I put more than ~500MB of binary data into the
database.
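The lookup described above can be sketched in SQL; the table and column
names here (images, path, img_oid) are illustrative assumptions, not the
actual schema from this report:

```sql
-- Hypothetical schema: map a virtual 'path' to a large object's OID
CREATE TABLE images (
    path    text PRIMARY KEY,  -- virtual path used by the website
    img_oid oid NOT NULL       -- OID of the large object holding the image
);

-- The accessor script would resolve the path to an OID like this,
-- then stream the blob out through the large-object API.
SELECT img_oid FROM images WHERE path = '/products/widget.jpg';
```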
Now, every time I run the accessor script (via the web or the command
line), the postmaster process gobbles up my CPU resources (usually >30%
for a single process - and it's a 1GHz processor with 1GB of RAM!), and
the script takes a very long time to dump out the data.
I have the same issue with an import script that reads files from the
hard drive and puts them into Large Objects in the database. Importing
now takes a very long time, whereas before it ran extremely fast.
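The import path can be sketched the same way, again assuming the
hypothetical images table above; lo_import() runs server-side, reads the
file from the server's filesystem, and returns the new large object's OID:

```sql
-- Hypothetical import step: create the large object from a file on the
-- server, then record its OID against the virtual path.
INSERT INTO images (path, img_oid)
VALUES ('/products/widget.jpg', lo_import('/var/images/widget.jpg'));
```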
Are there any known issues in PostgreSQL involving databases with a
lot of binary data? I am using PostgreSQL v7.2.3 on a Linux system.
Thanks,
-Jeremy
--
------------------------
Jeremy C. Andrus
http://www.jeremya.com/
------------------------
Next message: Tom Lane | 2003-04-28 05:00:01 | Re: pgsql BLOB issues
Previous message: Hannu Krosing | 2003-04-27 06:50:51 | Re: More tablescanning fun