Re: pg_upgrade with large pg_largeobject table

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Mate Varga <m(at)matevarga(dot)net>
Cc: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: pg_upgrade with large pg_largeobject table
Date: 2018-08-14 18:16:15
Message-ID: 7806.1534270575@sss.pgh.pa.us
Lists: pgsql-general

Mate Varga <m(at)matevarga(dot)net> writes:
>> Using the large-object API for things that tend to not actually be very
>> large (which they aren't, if you've got hundreds of millions of 'em) is an
>> antipattern, I'm afraid.

> I know :( So maybe I need to do some refactoring in the application and
> inline the lobs. The data is binary data with very high entropy (encrypted
> stuff). Would you recommend bytea for that?

Yeah, it'd likely be less of a pain-in-the-neck than text. You would need
some sort of encoding anyway to deal with zero bytes and sequences that
aren't valid per your encoding, so you might as well go with bytea's
solution.
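For anyone following along, the refactoring could be sketched roughly as below. Table and column names here are hypothetical, and `lo_get()` requires PostgreSQL 9.4 or later; treat this as an outline, not a drop-in migration (in particular, batch the UPDATE and unlink steps for hundreds of millions of rows):

```sql
-- Hypothetical table "files" with an oid column "lob" pointing at large objects.
-- 1. Add an inline bytea column.
ALTER TABLE files ADD COLUMN data bytea;

-- 2. Copy each large object's contents into the new column.
UPDATE files SET data = lo_get(lob);

-- 3. Remove the now-redundant large objects and the oid column.
SELECT lo_unlink(lob) FROM files WHERE lob IS NOT NULL;
ALTER TABLE files DROP COLUMN lob;
```

Unlike text, bytea needs no client-side encoding step: arbitrary byte sequences, including zero bytes, are stored as-is.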

regards, tom lane
