From: | "Mario Weilguni" <mario(dot)weilguni(at)icomedias(dot)com> |
---|---|
To: | <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Inefficient handling of LO-restore + Patch |
Date: | 2002-04-15 09:24:40 |
Message-ID: | D143FBF049570C4BB99D962DC25FC2D21780F8@freedom.icomedias.com |
Lists: pgsql-hackers
>"Mario Weilguni" <mario(dot)weilguni(at)icomedias(dot)com> writes:
>> And I did not find out how I can detect the large object
>> chunksize, either from getting it from the headers (include
>> "storage/large_object.h" did not work)
>
>Why not?
>
>Still, it might make sense to move the LOBLKSIZE definition into
>pg_config.h, since as you say it's of some interest to clients like
>pg_dump.
I tried another approach to detect the LOBLKSIZE of the destination server:
* at restore time, create a large object big enough to be split into two chunks (e.g. BLCKSZ+1 bytes)
* select octet_length(data) from pg_largeobject where loid=OIDOFOBJECT and pageno=0
* select lo_unlink(OIDOFOBJECT)
IMO this has the advantage that LOBLKSIZE is taken from the database I'm restoring into, not from a constant defined at compile time. The downside is that it wastes an OID.
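For illustration, a minimal libpq sketch of that probe (the standalone function, the 32769-byte payload chosen so the object spans two chunks for any common BLCKSZ, and the trimmed error handling are my assumptions, not part of the patch):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <libpq-fe.h>
#include <libpq/libpq-fs.h>

#define PROBE_SIZE 32769    /* bigger than one chunk for any common BLCKSZ */

/* Create a throwaway large object, read back the length of page 0 from
 * pg_largeobject (one page holds exactly one chunk, i.e. LOBLKSIZE bytes),
 * then unlink the probe again.  Returns the chunk size, or -1 on failure. */
static int
probe_loblksize(PGconn *conn)
{
    static char buf[PROBE_SIZE];
    char        query[256];
    PGresult   *res;
    Oid         loid;
    int         fd;
    int         chunksize = -1;

    memset(buf, 'x', sizeof(buf));

    /* large object descriptors only live inside a transaction */
    PQclear(PQexec(conn, "BEGIN"));

    loid = lo_creat(conn, INV_READ | INV_WRITE);
    fd = lo_open(conn, loid, INV_WRITE);
    lo_write(conn, fd, buf, sizeof(buf));
    lo_close(conn, fd);

    snprintf(query, sizeof(query),
             "SELECT octet_length(data) FROM pg_largeobject "
             "WHERE loid = %u AND pageno = 0", loid);
    res = PQexec(conn, query);
    if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) == 1)
        chunksize = atoi(PQgetvalue(res, 0, 0));
    PQclear(res);

    lo_unlink(conn, loid);      /* the probe still burns one OID */
    PQclear(PQexec(conn, "COMMIT"));

    return chunksize;
}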
Is there a way to get compile-time settings (such as BLCKSZ, LOBLKSIZE and the like) via functions, e.g.
select pginternal('BLCKSZ') or something similar?
I tested with and without my patch against 2 GB of large objects, comparing MD5 checksums, and got exactly the same result for all 25,000 large objects, so I think my patch is safe. If there's interest in integrating it into pg_dump, I'll prepare a patch against the current CVS version.
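One way to run such a comparison is to export every large object on both the source and the restored server and checksum the files externally (e.g. with md5sum). The sketch below is illustrative only, with hypothetical file naming and minimal error handling; it is not the harness used for the test above:

#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

/* Export every large object in the database to dir/lo.<oid> so an
 * external tool can checksum and diff the two sets of files. */
static void
export_all_los(PGconn *conn, const char *dir)
{
    PGresult   *res;
    int         i;

    /* wrap in a transaction; the large object calls require one */
    PQclear(PQexec(conn, "BEGIN"));

    res = PQexec(conn, "SELECT DISTINCT loid FROM pg_largeobject");
    if (PQresultStatus(res) == PGRES_TUPLES_OK)
    {
        for (i = 0; i < PQntuples(res); i++)
        {
            char fname[1024];
            Oid  loid = (Oid) strtoul(PQgetvalue(res, i, 0), NULL, 10);

            snprintf(fname, sizeof(fname), "%s/lo.%u", dir, loid);
            if (lo_export(conn, loid, fname) != 1)
                fprintf(stderr, "export of LO %u failed\n", loid);
        }
    }
    PQclear(res);

    PQclear(PQexec(conn, "COMMIT"));
}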