From: Christophe Courtois <christophe(dot)courtois(at)dalibo(dot)com>
To: Donato Marrazzo <donato(dot)marrazzo(at)gmail(dot)com>, Laurenz Albe <laurenz(dot)albe(at)cybertec(dot)at>
Cc: pgsql-admin(at)lists(dot)postgresql(dot)org
Subject: Re: How to get more than 2^32 BLOBs
Date: 2020-04-08 11:41:26
Message-ID: e39b2ce7-1974-6e06-da3e-f92a69d702a9@dalibo.com
Lists: pgsql-admin
Hi,
On 08/04/2020 at 12:12, Donato Marrazzo wrote:
> Hi Laurenz,
> thank you for your reply.
> Are you aware of any performance drawback?
We had a customer with millions of small Large Objects, in part because
their application forgot to unlink them.
As a consequence, pg_dump used huge amounts of memory, making a
backup impossible. That was on PG 9.5; I don't think the situation
has improved since.
> On Wed, Apr 8, 2020 at 12:06 Laurenz Albe
> <laurenz(dot)albe(at)cybertec(dot)at <mailto:laurenz(dot)albe(at)cybertec(dot)at>> wrote:
...
> > I'm working on a use case where there are many tables with blobs
> > (on average not too large: 32 KB).
> > I foresee that in a 2-3 year time frame, the limit of overall blobs
> > will be breached: more than 2^32 blobs.
> > - Is there a way to change the OID limit?
> > - Should we switch to a bytea implementation?
> > Are there any drawbacks of bytea except the maximum size?
> Don't use large objects. They are only useful if
> 1) you have files larger than 1GB or
> 2) you need to stream writes
>
> There are no such limitations if you use the "bytea" data type, and
> it is much simpler to handle at the same time.
+1
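For reference, a minimal sketch of the two approaches (table and column names are hypothetical). The point is that a large object lives outside the row, identified by an OID drawn from the single cluster-wide 32-bit OID space, and must be unlinked explicitly, whereas bytea content lives in the row itself (TOASTed when large) and is deleted with it:

```sql
-- Large-object approach: the table stores only an OID reference.
-- All large objects share one 32-bit OID space, and each one must be
-- removed with lo_unlink() or it is orphaned (the problem described above).
CREATE TABLE docs_lo (
    id   bigserial PRIMARY KEY,
    blob oid  -- filled via lo_create()/lo_import()
);

-- bytea approach: the content is an ordinary column value, so there is
-- no global counter to exhaust and no separate unlink step; DELETE on
-- the row removes the data.
CREATE TABLE docs_bytea (
    id      bigserial PRIMARY KEY,
    content bytea
);

INSERT INTO docs_bytea (content) VALUES ('\xdeadbeef');
```

Note that a single bytea value is limited to 1 GB, which is why large objects remain the option for files above that size or for streaming reads/writes.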
--
Christophe Courtois
Consultant Dalibo
https://dalibo.com/