From: Marco Colombo <marco(at)esi(dot)it>
To: Bruno Wolff III <bruno(at)wolff(dot)to>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Arnau Rebassa <arebassa(at)hotmail(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: Random not so random
Date: 2004-10-05 09:27:05
Message-ID: Pine.LNX.4.61.0410051033490.14637@Megathlon.ESI
Lists: pgsql-general
On Mon, 4 Oct 2004, Bruno Wolff III wrote:
> On Mon, Oct 04, 2004 at 18:58:41 +0200,
> Marco Colombo <pgsql(at)esiway(dot)net> wrote:
>>
>> Actually, that should be done each time the random() function
>> is evaluated. (I have no familiarity with the code, so please
>
> That may be overkill, since I don't think that random has been advertised
> as a secure or even particularly strong random number generator.
>
>> bear with me if the suggestion is unsound). I'd even add a parameter
>> for "really" random data to be provided, by reading /dev/random
>> instead of /dev/urandom (but read(2) may block).
>
> You don't want to use /dev/random. You aren't going to get better random
> numbers that way and blocking reads is a big problem.
Sure you are. As long as the entropy pool isn't empty, /dev/random
won't block, so there's no difference in behaviour.
When you're short of random bits, /dev/random blocks, while /dev/urandom
falls back to a PRNG plus a hash (SHA1, I believe). Under those conditions
/dev/urandom's output has no "entropy" at all: an attacker who can break
SHA1 can predict the output after a short period of observation.
That is, anything that uses /dev/urandom (when the kernel pool is
empty) is only as safe as SHA1 is.
I agree that for a general-purpose 'good' random() function,
/dev/urandom is enough (as opposed to a plain old PRNG).
In some applications you may need the extra security provided
by /dev/random: its output (_when_ it is available) is always
truly random (as long as you trust the kernel, of course - there
have been bugs in Linux in the past where the randomness of certain
sources was overestimated, but AFAIK they've been corrected).
>> How about the following:
>> random() = random(0) = traditional random()
>> random(1) = best effort random() via /dev/urandom
>> random(2) = wait for really random bits via /dev/random
>
> It might be nice to have a secure random function available in postgres.
> Just using /dev/urandom is probably good enough to provide this service.
Why not all of them? The problem is how to handle a potentially
blocking read from /dev/random (though _any_ disk read may block
as well). Just warn people not to use random(2) unless they really
know what they're doing...
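To show what I mean by the three levels, here's a rough sketch (the
function name and the conversion to [0,1) are mine, for illustration
only, and short reads are treated as errors for brevity):

#include <fcntl.h>
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>

/*
 * Hypothetical dispatch on the proposed levels:
 *   level 0 - traditional PRNG (random(3))
 *   level 1 - best effort, /dev/urandom
 *   level 2 - truly random bits, /dev/random (read may block!)
 * Returns a value in [0,1), or -1.0 on error.
 */
static double
pg_random_level(int level)
{
    const char *dev = (level == 2) ? "/dev/random" : "/dev/urandom";
    uint64_t    bits;
    int         fd;

    if (level == 0)
        return (double) random() / 2147483648.0;  /* random(3) is in [0, 2^31-1] */

    fd = open(dev, O_RDONLY);
    if (fd < 0)
        return -1.0;
    if (read(fd, &bits, sizeof(bits)) != sizeof(bits))
    {
        close(fd);
        return -1.0;
    }
    close(fd);
    return (double) (bits >> 11) / 9007199254740992.0;  /* top 53 bits -> [0,1) */
}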
I don't think the read(2) syscall overhead is noticeable (on Linux at
least), but we certainly can't afford to _open_ /dev/urandom on every
call... each backend would have to keep an extra fd open just for
/dev/urandom... hmm... I can't think of any better way of doing it.
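Something along these lines, maybe (again just a sketch, the names are
invented): open the device lazily, once per backend, and keep the fd
around so later calls only cost a read(2):

#include <fcntl.h>
#include <unistd.h>

/* Sketch only: one fd per backend, opened lazily on first use. */
static int  urandom_fd = -1;

static ssize_t
backend_urandom_bytes(void *buf, size_t n)
{
    if (urandom_fd < 0)
        urandom_fd = open("/dev/urandom", O_RDONLY);
    if (urandom_fd < 0)
        return -1;
    return read(urandom_fd, buf, n);    /* caller checks for short reads */
}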
.TM.
--
____/ ____/ /
/ / / Marco Colombo
___/ ___ / / Technical Manager
/ / / ESI s.r.l.
_____/ _____/ _/ Colombo(at)ESI(dot)it