From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: jeremy(at)jeremya(dot)com
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: pgsql BLOB issues
Date: 2003-04-28 05:00:01
Message-ID: 11754.1051506001@sss.pgh.pa.us
Lists: pgsql-performance
Jeremy Andrus <jeremy(at)jeremya(dot)com> writes:
> I have a database that contains a large number of Large Objects
> (>500MB total). I am using this database to store images for an e-commerce
> website, so I have a simple accessor script written in perl to dump out
> a blob based on a virtual 'path' stored in a table (and associated with
> the large object's OID). This system seemed to work wonderfully until I
> put more than ~500MB of binary data into the database.
Are you talking about 500MB in one BLOB, or 500MB total?
If the former, I can well imagine swap thrashing being a problem when
you try to access such a large blob.
If the latter, I can't think of any reason for total blob storage to
cause any big performance issue. Perhaps you just haven't vacuumed
pg_largeobject in a long time?
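As a concrete illustration of that last suggestion (this is a sketch, not part of the original message): on PostgreSQL of that era, which had no autovacuum, space freed by deleted or rewritten large objects in pg_largeobject was only reclaimed by an explicit vacuum run as a superuser, e.g. from cron:

```sql
-- Check how bloated the large-object catalog has become
-- (relpages grows with dead space; reltuples is the live row estimate):
SELECT relpages, reltuples
FROM pg_class
WHERE relname = 'pg_largeobject';

-- Reclaim dead space and refresh planner statistics:
VACUUM ANALYZE pg_largeobject;
```

If relpages is far out of proportion to reltuples, long-overdue vacuuming is a plausible explanation for degraded blob access times.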
regards, tom lane
Next message: Jeremy Andrus | 2003-04-28 05:33:38 | Re: pgsql BLOB issues
Previous message: Jeremy Andrus | 2003-04-28 02:30:23 | pgsql BLOB issues