From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Anna Akenteva <a(dot)akenteva(at)postgrespro(dot)ru>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] [bug-fix] Cannot select big bytea values (~600MB)
Date: 2018-02-27 20:58:34
Message-ID: CA+TgmoYRnY3g_Ab9uFDezxXyuUg3ZPyvjK6vR2uqZoSsDm7=tw@mail.gmail.com
Lists: pgsql-hackers
On Tue, Feb 27, 2018 at 2:17 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Robert Haas <robertmhaas(at)gmail(dot)com> writes:
>> +1. We don't have to support everything, but things that don't work
>> should fail on insertion, not retrieval. Otherwise what we have is
>> less a database and more a data black hole.
>
> That sounds nice as a principle but I'm not sure how workable it really
> is. Do you want to reject text strings that fit fine in, say, LATIN1
> encoding, but might be overlength if some client tries to read them in
> UTF8 encoding? (bytea would have a comparable problem with escape vs hex
> representation, for instance.) Should the limit vary depending on how
> many columns are in the table? Should we account for client-side tuple
> length restrictions?
I suppose what I really want is for the limit on how big the retrieved data
can be to be large enough that people stop hitting it.
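
Tom's examples are easy to quantify. As a rough, hypothetical sketch (not
from this thread, and not touching PostgreSQL internals), the Python below
just computes the byte counts a client would receive for the same stored
value under different representations; the sample text and sizes are made up.

    # Hypothetical illustration only: compute the byte counts a client would
    # receive for the same value under different representations.
    import os

    # 1. Text that fits in single-byte LATIN1; each accented character becomes
    #    two bytes once a client reads it in UTF8.
    text = "déjà vu, garçon, naïveté " * 1_000_000
    latin1_len = len(text.encode("latin-1"))
    utf8_len = len(text.encode("utf-8"))
    print(f"LATIN1: {latin1_len:>12,} bytes")
    print(f"UTF8:   {utf8_len:>12,} bytes ({utf8_len / latin1_len:.2f}x)")

    # 2. bytea: N stored bytes become roughly 2N+2 characters in hex output
    #    and up to ~4N characters in escape output (non-printable bytes ->
    #    \nnn, backslash -> \\, other printable ASCII passes through).
    raw = os.urandom(10_000_000)              # stand-in for a stored value
    hex_len = len("\\x" + raw.hex())
    escape_len = sum(2 if b == 0x5c else (1 if 0x20 <= b <= 0x7e else 4)
                     for b in raw)
    print(f"stored bytea:  {len(raw):>12,} bytes")
    print(f"hex output:    {hex_len:>12,} bytes")
    print(f"escape output: {escape_len:>12,} bytes (approx.)")

The exact numbers don't matter; the point is that a limit enforced only on
the stored size can still be exceeded at retrieval time.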
> Anyway, as Alvaro pointed out upthread, we've been down this particular
> path before and it didn't work out. We need to learn something from that
> failure and decide how to move forward.
Yep.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company