From: Volkan YAZICI <yazicivo(at)ttnet(dot)net(dot)tr>
To: Eric Davies <Eric(at)barrodale(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: bigger blob rows?
Date: 2006-01-18 18:10:29
Message-ID: 20060118181029.GF578@alamut
Lists: pgsql-general
On Jan 18 09:00, Eric Davies wrote:
> Back in the days of 7.4.2, we tried storing large blobs (1GB+) in
> postgres but found them too slow because the blob was being chopped
> into 2K rows stored in some other table.
> However, it has occurred to us that if it was possible to configure
> the server to split blobs into bigger pieces, say 32K, our speed
> problems might diminish correspondingly.
> Is there a compile-time constant or a run-time configuration entry
> that accomplishes this?
include/storage/large_object.h:64: #define LOBLKSIZE (BLCKSZ / 4)
include/pg_config_manual.h:26: #define BLCKSZ 8192
HTH.
Regards.