From: Jim Nasby <jim(at)nasby(dot)net>
To: Josh Berkus <josh(at)agliodbs(dot)com>, Florian Pflug <fgp(at)phlo(dot)org>
Cc: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>, Marko Tiikkaja <marko(at)joh(dot)to>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: plpgsql.consistent_into
Date: 2014-01-14 01:10:49
Message-ID: 52D48E99.8000703@nasby.net
Lists: pgsql-hackers
On 1/13/14, 7:06 PM, Josh Berkus wrote:
> On 01/13/2014 04:20 PM, Jim Nasby wrote:
>> On 1/13/14, 5:57 PM, Josh Berkus wrote:
>>> I *really* don't want to go through all my old code to find places where
>>> I used SELECT ... INTO just to pop off the first row, and ignored the
>>> rest. I doubt anyone else does, either.
>>
>> Do you regularly have use cases where you actually want just one RANDOM
>> row? I suspect the far more likely scenario is that people write code
>> assuming they'll get only one row and they'll end up with extremely hard
>> to trace bugs if that assumption is ever wrong.
>
> Regularly? No. But I've seen it, especially as part of a "does this
> query return any rows?" test. That's not the best way to test that, but
> that doesn't stop a lot of people doing it.
Right, and I certainly don't want to force anyone to rewrite all their code. But I would like a safer default, so that people don't end up on the "multiple rows is OK" route without choosing it very intentionally.
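
For illustration, a minimal sketch of the behavior under discussion (the table and data are hypothetical, not from this thread): plain INTO quietly assigns from the first row returned and discards the rest, INTO STRICT raises TOO_MANY_ROWS instead, and a FOUND check is the more direct way to write the "does this query return any rows?" test.

    -- Hypothetical table and data, for illustration only.
    CREATE TABLE t (id int, val text);
    INSERT INTO t VALUES (1, 'a'), (2, 'b');

    DO $$
    DECLARE
        v text;
    BEGIN
        -- Plain INTO: quietly takes the first row and ignores the second.
        -- With no ORDER BY, "first" is whatever the plan happens to return.
        SELECT val INTO v FROM t;
        RAISE NOTICE 'plain INTO got: %', v;

        -- INTO STRICT: errors out instead of hiding the extra row.
        BEGIN
            SELECT val INTO STRICT v FROM t;
        EXCEPTION WHEN too_many_rows THEN
            RAISE NOTICE 'INTO STRICT raised too_many_rows';
        END;

        -- The "any rows?" test, written directly with FOUND instead of INTO.
        PERFORM 1 FROM t WHERE id = 1;
        IF FOUND THEN
            RAISE NOTICE 'at least one matching row exists';
        END IF;
    END
    $$;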
--
Jim C. Nasby, Data Architect jim(at)nasby(dot)net
512.569.9461 (cell) http://jim.nasby.net