From: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
To: Karsten Hilbert <Karsten(dot)Hilbert(at)gmx(dot)net>
Cc: Sai Teja <saitejasaichintalapudi(at)gmail(dot)com>, pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: Re: Fatal Error : Invalid Memory alloc request size 1236252631
Date: 2023-08-17 15:54:22
Message-ID: CAFj8pRA4kOVANXa9dzX80_ChQ=itDJ1v2DmcXXvMB4xQVkhQ8w@mail.gmail.com
Lists: pgsql-general
Hi
On Thu, 17 Aug 2023 at 16:48, Karsten Hilbert <Karsten(dot)Hilbert(at)gmx(dot)net>
wrote:
>
> I also used PostgreSQL large objects, following this link, to store and
> retrieve large files (since bytea is not working):
> https://www.postgresql.org/docs/current/largeobjects.html
>
> But even now I am unable to fetch the data from large objects all at once:
>
> select lo_get(oid);
>
> Here I'm getting the same error message.
>
> But if I use select data from pg_largeobject where loid = 49374
> then I can fetch the data, but only page-wise (the data is split into rows
> of 2KB each).
>
> So how can I fetch the data in a single step, rather than page by page,
> without any error?
>
SQL functions are limited to 1GB per value.
You should use psql's \lo_import or \lo_export commands,
or the client-side large-object API:
https://www.postgresql.org/docs/current/lo-interfaces.html
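As a sketch of the page-by-page approach described above: rows fetched with
`SELECT pageno, data FROM pg_largeobject WHERE loid = 49374 ORDER BY pageno`
can be reassembled on the client. The database fetch itself is simulated here
(only the reassembly logic is shown), and the 2KB page size and zero-filled
holes follow the server defaults:

```python
# Sketch: reassemble a large object from pg_largeobject (pageno, data) rows.
# The row fetch is simulated; in practice the rows would come from a query
# like: SELECT pageno, data FROM pg_largeobject WHERE loid = 49374.

LOBLKSIZE = 2048  # default large-object page size (BLCKSZ / 4)

def reassemble(pages):
    """Concatenate (pageno, data) pages into one bytes object.

    Pages may arrive in any order; missing pages are treated as holes
    and filled with zero bytes, matching how the server reads sparse
    large objects.
    """
    out = bytearray()
    for pageno, data in sorted(pages):
        hole = pageno * LOBLKSIZE - len(out)
        if hole > 0:
            out.extend(b"\x00" * hole)
        out.extend(data)
    return bytes(out)

# Simulated pages: three pages arriving out of order.
pages = [(1, b"b" * 2048), (0, b"a" * 2048), (2, b"c" * 100)]
blob = reassemble(pages)
assert len(blob) == 2 * 2048 + 100
```

For transferring a whole large object, Pavel's suggestion avoids this
entirely: `\lo_export 49374 '/tmp/blob.bin'` in psql streams the object to a
file without ever materialising it as a single 1GB-limited value.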
regards
Pavel
> And I'm just wondering: how do many applications store huge amounts of
> data, in the GBs? I know that PostgreSQL sets a 1GB limit on each field.
> If so, how do you deal with these kinds of situations? I would like to
> understand this for real-world scenarios.
>
>
>
> https://github.com/lzlabs/pg_dumpbinary/blob/master/README.md
> might be of help
>
> Karsten
>
>
>