>  To protect against back-to-back measurements and userspace
>  observation, we insist that at least one context switch has occurred
>  since we last sampled before we trust a sample.
This sounds particularly obnoxious, since it is entirely possible to have
an idle machine that is just waiting for more entropy, and it means (for
example) that such a machine will effectively never get any more entropy,
simply because your context-switch counting will not work.
This is particularly true on things like embedded routers, where the 
machine usually doesn't actually _run_ much user-level software, but is 
just shuffling packets back and forth. Your logic seems to mean it never
adds any entropy from those packets, which can be _deadly_ if the router
is then also used for occasionally generating random numbers for other
things.
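
To make the objection concrete, here is roughly what the quoted policy
amounts to, and why a packet-shuffling box starves under it. This is only
a sketch with made-up names (nr_context_switches, trust_sample()); it is
not the actual patch.

	#include <stdbool.h>
	#include <stdio.h>

	static unsigned long nr_context_switches;  /* bumped by the scheduler */
	static unsigned long ctxt_at_last_sample;  /* snapshot from the last sample */

	/* Credit a timing sample only if at least one context switch
	 * happened since the previous sample -- the rule quoted above. */
	static bool trust_sample(void)
	{
		bool switched = (nr_context_switches != ctxt_at_last_sample);

		ctxt_at_last_sample = nr_context_switches;
		return switched;
	}

	int main(void)
	{
		int credited = 0;

		/* A router shuffling packets: interrupts keep arriving,
		 * but nothing ever schedules, so the counter never moves. */
		for (int packet = 0; packet < 1000; packet++)
			if (trust_sample())
				credited++;

		/* prints 0: every packet's timing gets thrown away */
		printf("samples credited: %d\n", credited);
		return 0;
	}
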
Explain to me why I should consider these kinds of draconian measures
acceptable. It seems just fascist and outright stupid: avoiding randomness
just because you think you can't prove that it is random is not very
productive.
We might as well get rid of /dev/random altogether if it is not useful. 
		Linus