From: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
To: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
Cc: Steve Atkins <steve(at)blighty(dot)com>, pgsql-general General <pgsql-general(at)postgresql(dot)org>
Subject: Re: more than 2GB data string save
Date: 2010-02-10 06:51:37
Message-ID: 162867791002092251x6d5b405ctd5145f08d1d5b377@mail.gmail.com
Lists: pgsql-general
2010/2/10 Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>:
> On Tue, Feb 9, 2010 at 11:26 PM, Steve Atkins <steve(at)blighty(dot)com> wrote:
>>
>> On Feb 9, 2010, at 9:52 PM, Scott Marlowe wrote:
>>
>>> On Tue, Feb 9, 2010 at 9:38 PM, AI Rumman <rummandba(at)gmail(dot)com> wrote:
>>>> How do I save a text string of 2 GB or more in PostgreSQL?
>>>> Which data type should I use?
>>>
>>> If you have to, you can use either the lo interface or bytea. A
>>> large object (i.e. lo) allows access much like fopen / fseek etc.
>>> in C, but the actual data are not stored in a row with other
>>> data; they live alone in the lo space. Bytea is a regular type
>>> that you can have as one of many columns in a row, but you
>>> retrieve the whole thing at once when you fetch the row.
>>
>> Bytea definitely won't handle more than 1 GB. I don't think the lo interface
>> will handle more than 2GB.
>
> That really depends on how compressible it is, doesn't it?
>
No. 1 GB is the hard maximum length of any varlena value; TOAST
compression and out-of-line storage happen within that limit, so
compressibility does not raise it.
Regards
Pavel Stehule
p.s.
Processing very large SQL values - bytea or text values tens of
megabytes long or more - is very expensive in memory. When you process
a 100MB bytea, you need about 300MB of RAM. Using a bytea over 100MB is
not a good idea; the LO interface is better and much faster.
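
Below is a minimal sketch of that streaming access through libpq's
client-side lo_* functions (this code is not from the original mail;
the connection string "dbname=test" and the file name "huge.dat" are
placeholder assumptions):

    #include <stdio.h>
    #include <libpq-fe.h>
    #include <libpq/libpq-fs.h>   /* INV_READ / INV_WRITE */

    int main(void)
    {
        /* Placeholder connection string - adjust for your setup. */
        PGconn *conn = PQconnectdb("dbname=test");
        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            return 1;
        }

        /* Large object operations must run inside a transaction. */
        PQclear(PQexec(conn, "BEGIN"));

        Oid loid = lo_creat(conn, INV_READ | INV_WRITE);
        int fd = lo_open(conn, loid, INV_WRITE);

        FILE *in = fopen("huge.dat", "rb");   /* placeholder input file */
        if (fd < 0 || in == NULL)
        {
            fprintf(stderr, "could not open large object or input file\n");
            PQfinish(conn);
            return 1;
        }

        /*
         * Stream in small chunks: memory use stays constant no matter
         * how big the object gets, unlike a single bytea datum that
         * must be assembled whole in RAM.
         */
        char buf[8192];
        size_t n;
        while ((n = fread(buf, 1, sizeof(buf), in)) > 0)
            lo_write(conn, fd, buf, n);

        fclose(in);
        lo_close(conn, fd);
        PQclear(PQexec(conn, "COMMIT"));

        printf("stored large object with OID %u\n", (unsigned int) loid);
        PQfinish(conn);
        return 0;
    }

Reading back works the same way with lo_open(conn, loid, INV_READ),
lo_read() and lo_lseek(); note that lo_lseek() takes a plain int
offset, which matches the roughly 2GB large-object ceiling Steve
mentions above.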