Helmut Kurth, Chief Scientist, atsec information security
Illusions can be fun ... if we know they are an illusion. They can make us feel good... even if they are bad. They can make us think something is okay... when it's really broken.
There are some illusions we like in technology, like virtualization! The illusion is that we have more resources than we really have. We can save money - that's good, right? Some people think that virtualization provides more security - but does it really? Vendors claim their virtualized systems are just like a real hardware box... but are they? Often not exactly. Something has to give, and we should understand what.
How is entropy impacted once you virtualize the system? Virtualization can change the timing; the system may simply behave differently. Either way, are we getting the same entropy on these systems?
Often, we make incorrect assumptions about things like timing and the similarity of a virtualized system to its true hardware counterpart.
For example, if you're using time as your entropy source, you may assume the lowest-order bits change most frequently and will provide the most actual entropy. But what if this is not the true timer? What if a hypervisor is intercepting and interpreting the concept of "time" - what if the hypervisor should not be trusted?
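To make the pitfall concrete, here is a minimal sketch (the function names are mine, not from the talk) that samples the low-order bits of a high-resolution timer. On bare metal those bits usually churn through many values; a coarse timer interposed by a hypervisor may yield far fewer distinct patterns:

```python
import time

def low_bits_sample(n=1000, bits=4):
    """Sample the low-order `bits` of a high-resolution timer n times."""
    mask = (1 << bits) - 1
    return [time.perf_counter_ns() & mask for _ in range(n)]

def distinct_patterns(samples):
    """Crude health check: how many distinct low-bit patterns actually
    occur? A small count hints the timer is coarser than assumed --
    e.g. one emulated or interposed by a hypervisor."""
    return len(set(samples))
```

This is only a heuristic, not a proof of entropy: a hostile hypervisor could make emulated timer bits look lively while keeping them entirely predictable.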
Shouldn't you be able to trust your hypervisor? Once someone has breached the hypervisor, they can do all kinds of evil things underneath your VM and you won't be able to easily detect it (as your OS will be unchanged).
For example, the RDRAND instruction can be intercepted by a hypervisor. This is a "feature" documented by Intel. So, as a user of the VM, you think you're getting some pretty good entropy from RDRAND - but you're really getting poor entropy from your hypervisor. How could you detect this? Intel's RDRAND is often used as the sole source of randomness with no DRNG postprocessing (like in OpenSSL), regardless of whether the "randomness" is being used for generating nonces or for generating cryptographic key material.
Assuming a compromised hypervisor, the bad guy holds the key used to generate the "random" sequences in the RDRAND emulation. With that key he can regenerate the different random streams, and he can observe the nonce, which is transmitted in the clear.
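The attack hinges on determinism: a DRBG is just a keyed function, so whoever holds the key can replay every output. A toy stand-in (using HMAC-SHA256 from Python's standard library rather than the AES-CTR construction real hardware DRNGs use; the names and key are illustrative) makes the point:

```python
import hmac, hashlib

def keyed_stream(key, counter, nbytes=16):
    """Toy deterministic generator: output = HMAC(key, counter).
    A stand-in for the DRBG inside a compromised RDRAND emulation;
    same key + same counter => same 'random' bytes, every time."""
    return hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()[:nbytes]

# The victim VM asks its (emulated) RDRAND for a nonce...
victim_nonce = keyed_stream(b"hypervisor-secret", 0)
# ...and the attacker, holding the hypervisor's key, regenerates it offline.
attacker_nonce = keyed_stream(b"hypervisor-secret", 0)
```

Nothing the guest observes distinguishes this stream from genuine randomness - the predictability lives entirely in the key the guest never sees.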
Launching the attack requires installation of a hypervisor or the modification of a running hypervisor. Just one flaw that allows the execution of privileged code is all it takes. The hacker, in this case, may have the session key even before the communicating parties have it! At this point, it doesn't matter what algorithm you're using - communications are essentially in the clear.
This attack is not trivial - it requires taking over the hypervisor. And yes, once you own the hypervisor, you can do anything you want! But this isn't about taking down the machine; this is about eavesdropping undetected for any length of time.
This is a sneaky attack - virus scanners and host-based IDS will tell you everything is okey dokey! It is independent of the OS and applications (as many rely on RDRAND).
How can you protect yourself? The basic solution is diversification - do not use a single source of entropy. Use a different RNG for nonce generation and generation of key material, and use RDRAND for seeding (with other sources) rather than directly for generating critical security parameters. Read the hardware documentation carefully - make sure you understand what you're getting.
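Here is a minimal sketch of that diversification idea (the function name and source list are illustrative, not a vetted design): feed several independent inputs through a hash to derive a seed, so a single compromised source - RDRAND included - cannot predict the result on its own. A real system should feed sources into a reviewed DRBG, as a kernel entropy pool does:

```python
import hashlib, os, time

def seed_from_multiple_sources(personalization=b"example"):
    """Derive a seed by mixing independent entropy sources through
    SHA-256 instead of trusting any single one. An attacker who
    controls one input still cannot predict the digest without
    controlling the others as well."""
    h = hashlib.sha256()
    h.update(os.urandom(32))                                # OS entropy pool
    h.update(time.perf_counter_ns().to_bytes(8, "little"))  # timer state
    h.update(os.getpid().to_bytes(4, "little"))             # process-local value
    h.update(personalization)                               # caller-supplied string
    return h.digest()
```

In this scheme a hardware generator like RDRAND would be one more `h.update(...)` input - useful entropy if honest, harmless if not - rather than the direct source of key material.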
Intel isn't going to fix this, as this isn't a bug... it's a feature!
Entropy analysis isn't just about the mathematics - be aware of how you procure the numbers and how they are used in your system.
As a side note, we (in Solaris land) are aware that this is a tough problem and hard to get right, and we've already implemented countermeasures. Darren Moffat did a great write-up on Solaris randomness and how we use it in the Solaris Cryptographic Framework, describing the countermeasures we already have in place.