From: Doug McNaught <doug(at)mcnaught(dot)org>
To: Eric Davies <Eric(at)barrodale(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: bigger blob rows?
Date: 2006-01-18 17:23:31
Message-ID: 874q41xxf0.fsf@asmodeus.mcnaught.org
Lists: pgsql-general
Eric Davies <Eric(at)barrodale(dot)com> writes:
> Back in the days of 7.4.2, we tried storing large blobs (1GB+) in
> postgres but found them too slow because the blob was being chopped
> into 2K rows stored in some other table.
> However, it has occurred to us that if it were possible to configure
> the server to split blobs into bigger pieces, say 32K, our speed
> problems might diminish correspondingly.
> Is there a compile-time constant or a run-time configuration entry
> that accomplishes this?
I *think* the limit would be 8k (the size of a PG page) even if you
could change it. The large-object chunk size is derived from the page
size, so upping it would require recompiling with BLCKSZ set larger,
which would have a lot of other consequences.
-Doug