From: H. Harada <umi(dot)tanuki(at)gmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Returning large bytea chunk
Date: 2008-02-18 01:18:46
Message-ID: e08cc0400802171718l7a0b0546nccb0adcb7448d6f@mail.gmail.com
Lists: pgsql-general
I would like to store a very large piece of data in one column; it contains
my own data structure with its own memory layout. The data will be more
than 500 MB.
I wrote something like:
Datum
myfunc(PG_FUNCTION_ARGS)
{
    /* here get my data, allocating ~500MB */
    ...
    /* allocate more than 500MB for the result */
    bytea *result = (bytea *) palloc(size);

    VARATT_SIZEP(result) = size;
    memcpy(VARDATA(result), my_data, size - VARHDRSZ);
    pfree(my_data);
    PG_RETURN_BYTEA_P(result);
}
This code succeeded on Linux but failed on Windows with a memory
allocation error. I guess it hit the per-process memory limit. In any
case, I think this approach is neither smart nor efficient, because, as
I understand it, the postmaster copies the tuple returned by the user
function:
original 500MB -> bytea palloc 500MB -> postmaster copy 500MB -> tuple
on relation
Since this is such a rare case, I would like to hear your opinions about
what you would do here. Write the data to a file under my own control?
Is that allowed in PG? Or write it to a heap relation from the user
function? But how?
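As an illustration of the own-controlled-file option, here is a minimal
sketch of streaming a large payload to disk in fixed-size chunks, so that
no single allocation has to cover the whole 500 MB; the function name,
path, and chunk size are hypothetical, not part of any PG API:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
 * Hypothetical helper: stream `total` bytes to `path` in fixed-size
 * chunks so no single allocation has to cover the whole payload.
 * Returns the number of bytes written, or 0 on error.
 */
static size_t
write_in_chunks(const char *path, size_t total, size_t chunk_size)
{
    FILE   *fp = fopen(path, "wb");
    char   *chunk;
    size_t  written = 0;

    if (fp == NULL)
        return 0;

    chunk = malloc(chunk_size);
    if (chunk == NULL)
    {
        fclose(fp);
        return 0;
    }
    memset(chunk, 'x', chunk_size);     /* stand-in for real data */

    while (written < total)
    {
        size_t n = total - written;

        if (n > chunk_size)
            n = chunk_size;
        if (fwrite(chunk, 1, n, fp) != n)
            break;                      /* short write: bail out */
        written += n;
    }

    free(chunk);
    fclose(fp);
    return written;
}
```

The same chunked pattern would apply whether the destination is a plain
file or the large-object facility; the point is only that the buffer size
stays bounded regardless of the total data size.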
Note that only one tuple can be used. Splitting the data into several
tuples or datums is not an option, because the data chunk is a single
structure with its own specific memory layout, as described above.
Regards,
Hitoshi Harada