From: Laurenz Albe <laurenz(dot)albe(at)cybertec(dot)at>
To: Michael Paquier <michael(at)paquier(dot)xyz>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Dmitry Igrishin <dmitigr(at)gmail(dot)com>, pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: Practical usage of large objects.
Date: 2020-05-14 13:36:19
Message-ID: 72a02fe6b1b78120136cf7f96cd35c0c2ed2f4d2.camel@cybertec.at
Lists: pgsql-general
On Thu, 2020-05-14 at 12:59 +0900, Michael Paquier wrote:
> On Wed, May 13, 2020 at 01:55:48PM -0400, Tom Lane wrote:
> > Dmitry Igrishin <dmitigr(at)gmail(dot)com> writes:
> > > As you know, PostgreSQL has a large objects facility [1]. I'm curious
> > > whether there are real systems which use this feature.
> >
> > We get questions about it regularly, so yeah people use it.
>
> I recall that some applications where I work make use of it for some
> rather large log-like data. At the end of the day, it really boils
> down to whether you wish to store blobs of data larger than 1GB,
> the limit for toasted fields, as LOs can be up to 4TB. Also, updating
> or reading a LO can be much cheaper than a toasted field, as the
> latter would update/read the value as a whole.
Interesting; I experimented with that only recently and found that
it is not necessarily true:
https://www.cybertec-postgresql.com/en/binary-data-performance-in-postgresql/
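For what it's worth, the access-pattern argument above can be sketched with a simplified cost model (plain Python, not PostgreSQL code; the 2 kB chunk size mirrors PostgreSQL's LOBLKSIZE, and the model deliberately ignores TOAST's own slicing of uncompressed external values):

```python
# Simplified model of the claim above: reading a slice of a toasted value
# detoasts the whole value, while a large object supports lo_lseek()/loread()
# and only the chunks covering the requested range are touched.

LOBLKSIZE = 2048  # large objects are stored in chunks of roughly this size

def bytes_touched_toast(value_len: int, offset: int, length: int) -> int:
    # In this model, any slice of a toasted value costs the full value.
    return value_len

def bytes_touched_lo(value_len: int, offset: int, length: int) -> int:
    # Only the chunks overlapping [offset, offset + length) are read.
    first_chunk = offset // LOBLKSIZE
    last_chunk = (offset + length - 1) // LOBLKSIZE
    return (last_chunk - first_chunk + 1) * LOBLKSIZE

# Reading 4 kB out of a 1 GB value:
one_gb = 1 << 30
print(bytes_touched_toast(one_gb, 10_000_000, 4096))  # the whole value
print(bytes_touched_lo(one_gb, 10_000_000, 4096))     # three 2 kB chunks
```

Of course, this only models the read path; as the benchmark linked above shows, the real-world picture also depends on client-side round trips and the overhead of the large object API.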
Yours,
Laurenz Albe