From: | Eric Davies <Eric(at)barrodale(dot)com> |
---|---|
To: | pgsql-general(at)postgresql(dot)org |
Subject: | bigger blob rows? |
Date: | 2006-01-18 17:00:47 |
Message-ID: | 6.2.5.6.0.20060118082441.030ca620@barrodale.com |
Lists: | pgsql-general |
Back in the days of 7.4.2, we tried storing large blobs (1GB+) in
postgres but found them too slow, because each blob was being chopped
into 2 KB chunks, each stored as a row in a separate table.
However, it has occurred to us that if it were possible to configure
the server to split blobs into bigger pieces, say 32 KB, our speed
problems might diminish correspondingly.
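Back-of-the-envelope, the row-count reduction we are hoping for looks like this (a rough sketch in Python, assuming a 1 GiB blob; the 2 KB figure is the chunk size we observed):

```python
# Rough row-count arithmetic for a 1 GiB blob stored as fixed-size chunks.
BLOB_SIZE = 1 * 1024 ** 3          # 1 GiB, in bytes

rows_2k = BLOB_SIZE // (2 * 1024)    # current 2 KB chunks
rows_32k = BLOB_SIZE // (32 * 1024)  # proposed 32 KB chunks

print(rows_2k)              # 524288 rows
print(rows_32k)             # 32768 rows
print(rows_2k // rows_32k)  # 16, i.e. 16x fewer rows per blob
```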
Is there a compile-time constant or a run-time configuration entry
that accomplishes this?
Thank you.
**********************************************
Eric Davies, M.Sc.
Barrodale Computing Services Ltd.
Tel: (250) 472-4372 Fax: (250) 472-4373
Web: http://www.barrodale.com
Email: eric(at)barrodale(dot)com
**********************************************
Mailing Address:
P.O. Box 3075 STN CSC
Victoria BC Canada V8W 3W2
Shipping Address:
Hut R, McKenzie Avenue
University of Victoria
Victoria BC Canada V8W 3W2
**********************************************