Thursday, November 5, 2015

ICMC15: SP800-90B: Analysis of Linux /dev/random

Stephan Mueller, Principal Consultant and Evaluator, atsec information security

Not only does Mr. Mueller work at atsec, but he also is a Linux kernel developer in his spare time. 

How do we define the noise sources for /dev/random?
  • Disk I/O
    • Block device number + 0x100 || Jiffies || High-Resolution Timer
    • Noise derived from access times to spinning platters
  • Human Interface Devices (HID)
  • Interrupts
    • fast_pool: 4 32-bit words filled with
      • Jiffies
      • High-Resolution Timer
      • IRQ number
      • 64-bit instruction pointer of the CPU
    The output function is a SHA-1 hash run over the pool contents, with the result placed in the output pool.
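As a rough illustration of the per-event record described above (the field layout and names are my own assumptions for illustration, not the kernel's actual struct), a disk event could be modeled as a concatenation of the three fields:

```python
# Hedged sketch: model of the per-event data for a disk event, per the
# "block device number + 0x100 || Jiffies || high-resolution timer"
# structure above. Field widths here are illustrative.

def disk_event_record(block_dev_num: int, jiffies: int, hrtimer: int) -> bytes:
    """Concatenate the three fields of a disk event (illustrative layout)."""
    num = (block_dev_num + 0x100) & 0xFFFFFFFF   # rather static data
    jif = jiffies & 0xFFFFFFFF                   # low-entropy data
    hrt = hrtimer & 0xFFFFFFFF                   # high-entropy data
    return num.to_bytes(4, "big") + jif.to_bytes(4, "big") + hrt.to_bytes(4, "big")

record = disk_event_record(block_dev_num=8, jiffies=123456, hrtimer=987654321)
```

This ordering also makes the bias structure discussed below visible: the static field sits at one end of the record, the high-entropy timer at the other.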

    How do we condition? The conditioner mixes data into the input_pool, a non-approved mechanism according to SP800-90B: an LFSR over a 128 * 32-bit pool with a full-rank polynomial. The goal is bias reduction. For Disk/HID, bias is added due to the structure of the data mixed into the input_pool: the MSBs carry rather static data (event numbers), the middle bits low-entropy data (Jiffies), and the LSBs high-entropy data (the high-resolution timer).
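A minimal sketch of that kind of LFSR-style mixing over a 128-word pool (the tap offsets and the rotation below are illustrative stand-ins, not the exact polynomial or twist table from the driver's random.c):

```python
# Hedged sketch of LFSR-style mixing into a 128-word (32-bit) pool.
# The tap positions are illustrative; the real driver uses a specific
# full-rank polynomial plus a twist table.

POOL_WORDS = 128
TAPS = (104, 76, 51, 25, 1)  # illustrative tap offsets

def mix_word(pool: list, pos: int, word: int) -> int:
    """XOR a 32-bit word into the pool at pos, folding in the tapped words."""
    w = word & 0xFFFFFFFF
    for t in TAPS:
        w ^= pool[(pos + t) % POOL_WORDS]
    # rotate left by 1 bit as a stand-in for the driver's input rotation
    w = ((w << 1) | (w >> 31)) & 0xFFFFFFFF
    pool[pos] = w
    return (pos + 1) % POOL_WORDS  # next insertion position

pool = [0] * POOL_WORDS
pos = 0
for word in (0xDEADBEEF, 0x12345678):
    pos = mix_word(pool, pos, word)
```

The point of the full-rank polynomial is that every input bit eventually influences the whole pool, which is what the bias-reduction claim rests on.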

    There is a health test, but it is not compliant with SP800-90B. For disk/HID they measure the derivatives of the event time in Jiffies; the minimum of all derivatives is the estimated entropy of the event, limited to a maximum of 11 bits per event. The claim is that one bit of input data holds less than one bit of entropy. So, this covers the continuous health test requirement and the repetition count requirement (even though, again, it's not the test SP800-90B actually asks for).
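The delta-based estimator described above can be sketched roughly as follows (a simplification of the kernel's timer-event logic; the class and method names are mine):

```python
# Hedged sketch of the per-event entropy estimator described above:
# take the first, second, and third differences ("derivatives") of the
# event time in Jiffies, use the minimum as the entropy estimate, and
# cap the credit at 11 bits per event.

class DeltaEstimator:
    def __init__(self):
        self.last_time = 0
        self.last_delta = 0
        self.last_delta2 = 0

    def credit_bits(self, jiffies: int) -> int:
        delta = jiffies - self.last_time
        delta2 = delta - self.last_delta
        delta3 = delta2 - self.last_delta2
        self.last_time = jiffies
        self.last_delta = delta
        self.last_delta2 = delta2
        # minimum of the absolute first/second/third differences
        m = min(abs(delta), abs(delta2), abs(delta3))
        # entropy estimate: floor(log2(m)), capped at 11 bits per event
        bits = m.bit_length() - 1 if m > 0 else 0
        return min(bits, 11)

est = DeltaEstimator()
credits = [est.credit_bits(t) for t in (100, 228, 500, 9000)]
```

Note how perfectly periodic events drive one of the differences to zero, so the minimum (and thus the credited entropy) collapses to zero — that is the failure-detection property the notes refer to.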


    These derivatives are applied to all events, including at startup - so we kind of have a start-up test.

    Interrupts: implicit, without interrupt handling there is no live Linux kernel :-) 

    There is no adaptive proportion test - entropy measurement considered equivalent.

    The goal of identifying failures is met.

    How to observe the noise sources and the LFSR/input_pool in a live, unchanged Linux kernel without impacting the entropy calculation? Answer: SystemTap! Though Heisenberg's uncertainty principle still applies, as it changes timing by adding additional code paths. This is akin to executing Linux on a slower system, and therefore has no impact on entropy estimates. [Note: I'm not sure I agree with that theory.]

    Their test results:
    General: Chi-Square for IID cannot be calculated, and Chi-Square Goodness-of-Fit cannot be calculated. For Disk/HID, only the high-resolution timer was tested, since it is the main entropy source - the other items had zero or close to zero entropy. But, when using the lower 11 bits of the timer, the data is almost IID. Looking at interrupts, each 32-bit word is tested individually. About half of the IID tests fail -> non-IID.

    General conditioner IID testing: collision tests are N/A due to no collisions, and all sanity checks always pass.

    Is the /dev/random on Linux good enough? Noise sources: using the min-entropy values for events and considering the maximum of 11 bits of entropy estimated per event, we pass SP800-90B. Additional testing beyond SP800-90B shows that more than 6% of events are estimated to have 0 bits of entropy, and about 20% of events are estimated to have 1 bit of entropy - a massive underestimation of entropy in the noise sources. For the conditioner: no change from the noise sources. The entropy estimator is enforced when obtaining random values, preventing random number output when there is too little noise. Passing of the noise sources implies passing of the entire random number generator. The other pools feeding /dev/random and /dev/urandom are DRNGs!

    SP800-90B assumes a certain structure of an entropy source: a straight data path from noise source to output. It is difficult to apply to noise sources maintaining an entropy pool.

    Post by Valerie Fenwick, syndicated from Security, Beer, Theater and Biking!