From: John R Pierce <pierce(at)hogranch(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Scanning a large binary field
Date: 2009-03-15 21:06:37
Message-ID: 49BD6DDD.4090502@hogranch.com
Lists: pgsql-general
Kynn Jones wrote:
> I have a C program that reads a large binary file, and uses the read
> information plus some user-supplied arguments to generate an in-memory
> data structure that is used during the remainder of the program's
> execution. I would like to adapt this code so that it gets the
> original binary data from a Pg database rather than a file.
>
> One very nice feature of the original scheme is that the reading of
> the original file was done piecemeal, so that the full content of the
> file (which is about 0.2GB) was never in memory all at once, which
> kept the program's memory footprint nice and small.
>
> Is there any way to replicate this small memory footprint if the
> program reads the binary data from a Pg DB instead of from a file?
Is this binary data in any way record- or table-structured, such that it could be stored as multiple rows and perhaps fields? If not, why would you want to put a 200MB blob of amorphous data into a relational database?
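For what it's worth, even if the data stays a single blob, the piecemeal read can be reproduced by fetching slices of a bytea column with substring() (or by streaming through the large-object lo_read interface) instead of pulling the whole value at once. Below is a minimal sketch of the chunked scan loop, in C since the original program is C; the fetch callback, table, and column names are hypothetical. In a real program the fetcher would wrap a parameterized libpq query such as `SELECT substring(data from $1 for $2) FROM blobs WHERE id = $3` via PQexecParams, so only one chunk is ever in memory.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical chunk size; tune to taste. */
#define CHUNK 8192

/* A chunk fetcher copies up to `len` bytes starting at `offset` into `buf`
 * and returns the number of bytes copied, 0 at end of data.  A real
 * implementation would issue the substring() query above (note that SQL
 * substring() is 1-based, so it would pass offset + 1). */
typedef size_t (*fetch_fn)(void *ctx, size_t offset, void *buf, size_t len);

/* Drive the piecemeal scan: repeatedly fetch the next chunk and hand it to
 * a processing routine, mirroring the original file-based read loop.
 * Returns the total number of bytes scanned. */
static size_t scan_blob(fetch_fn fetch, void *fctx,
                        void (*process)(const void *chunk, size_t n, void *pctx),
                        void *pctx)
{
    unsigned char buf[CHUNK];
    size_t offset = 0;
    size_t got;

    while ((got = fetch(fctx, offset, buf, sizeof buf)) > 0) {
        process(buf, got, pctx);   /* build the in-memory structure incrementally */
        offset += got;
    }
    return offset;
}
```

Swapping the file `read()` for a fetcher like this keeps the memory footprint at one chunk regardless of the blob's total size.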
Previous Message: Kynn Jones | 2009-03-15 20:42:24 | Scanning a large binary field
Next Message: Kynn Jones | 2009-03-15 21:20:39 | Re: Scanning a large binary field