Wednesday, November 4, 2015

ICMC15: Validating a Virtual Module Without Guidance from CMVP

Steve Ratcliffe, TME, Cisco Systems

The concept of virtual computing originated in the 1960s with IBM and MIT. It's come a long way since then - a long way just since 2001! It is replacing physical machine infrastructures. The government is asking for this, but there's not really any guidance from CMVP - so how do we do this?

Virtual systems are typically modeled on real physical systems. Is the module hardware? Firmware or software? CMVP will ask this question.

Their system is built on top of hardware which is running an OS with executables. Then on top, a virtual OS with more executables and crypto modules. In between - the middle man - the hypervisor.

Webster defines virtual as not formally recognized, not real. But... we have it!

Think more about the difference between the brain and the mind.  The mind exists, but you can't touch and feel it like you can a brain. (ew!)

Customers love this - it takes less space and less power. What's not to love?

But, how do you test? Let alone validate?

Let's think about how FIPS 140-2 maps to the virtual world.

How do you define the boundary? For software, we'd just identify the crypto module. But, as we're relying on the hypervisor - do we need to include that in the boundary? It provides input into the virtualized environment, but it is not in the boundary. Your boundary would be much the same as it was on a non-virtualized system, just a bit different.

Firmware cannot be dynamically written or modified. Software can be, according to the CMVP. But, can you prove your virtual module is not modifiable? No... then you have a software module.

You don't own the hardware, you don't own the hypervisor, you don't own the firmware underneath - so you can't prove that the VMs are truly isolated.

What if the OS underneath is a trusted OS? Or you've created the hypervisor?

Situations vary greatly. 

[Note: I think we could have a different story on SPARC, where we do have one company that owns the chips, the hypervisor and the virtualization - but that's not a generic solution.]

The best you can easily get here is going to be level 1, unless you have specific hardware identified to provide physical protection - but then the physical layer is set, you can't change it. Doesn't buy you much.

Roles - keep it simple. Don't take stuff to CMVP that's going to cause them to ask questions :-)  There is an admin role that controls the hypervisor. That doesn't necessarily need to be in the security policy, but you better understand it. 

But, when does power-on start?  When the VM gets loaded, not when the hypervisor gets loaded.

Physical security? Doesn't really apply, as this is not a physical module! :-)

But... what is the operating environment? We have an OS running on the virtual module. This OS's access is controlled by the hypervisor. The hypervisor could be controlling other VMs. Access to the hypervisor is still a questionable thing.

One of the requirements is that you need to have a single-operator mode of operation - CMVP says if you're in a server-type setting, the server is the operator. [Note: huh?] You have to say it's a server role - a process. You do need to set this up as a protected environment. See IG 6.1 for more details.

A concern from the audience: what about entropy? Could you be starved by other VMs, or by attacks on the hypervisor? These are all things you have to look at.

Key management should be just about the same. But, it's a little bit harder to talk about key storage - your keys will have to stay in the virtual module.

But, RNG is a very real concern. CMVP has recently been really critical of this, though it really is a long-term problem. Can you ask the hypervisor for entropy? We can't just trust everything. Ideally, we'd like to get entropy from the hypervisor - but in a virtual system, the module can't get directly to the hardware entropy source.

Software entropy sources can be used to get around these unknowns - but is it as good?
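As a rough sketch of that fallback - assuming a Linux guest where a hypervisor-provided source like virtio-rng, if configured, appears as /dev/hwrng - a module might prefer the passed-through hardware source and fall back to the kernel's software-mixed pool (the function name and paths here are illustrative, not from the talk):

```python
import os

def read_entropy(n, hwrng="/dev/hwrng"):
    """Read n bytes of seed material.

    Prefer a hardware RNG passed through by the hypervisor (e.g. virtio-rng,
    which Linux guests typically expose as /dev/hwrng); otherwise fall back
    to the kernel's software entropy pool. Whether that fallback is "as good"
    is exactly the open question raised in the talk.
    """
    path = hwrng if os.path.exists(hwrng) else "/dev/urandom"
    with open(path, "rb") as f:
        return f.read(n)
```

Either way, the source feeding the module's DRBG has to be identified, tested, and documented per hypervisor.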

There are multiple types of hypervisors: Type I and Type II. Type I runs on bare metal; Type II is virtualization on top of a host OS (I would think about our classic zones).

VMware uses timing for entropy, KVM, XEN and Hyper-V are similar... but different. 

Each hypervisor should be treated as a new entropy process and source, each one should be tested and documented.

Hypervisor attacks are rare, but hypervisors are not immune. They are vulnerable at the network layer and from the hosts running on them.

If the hypervisor is attacked, all of the VMs and the host itself are accessible to the attacker. So, we really have to test these!

For self tests, you have to really think: when is power-on?  When the hypervisor starts?
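Following the earlier point that power-on means the VM/module load, not hypervisor boot, a minimal sketch of a power-on self-test run at module load is a known-answer test - here a SHA-256 KAT using the standard NIST "abc" test vector (the function name and error handling are illustrative assumptions, not the speaker's implementation):

```python
import hashlib

# Known-answer test vector: SHA-256("abc"), from the NIST test vectors.
KAT_INPUT = b"abc"
KAT_EXPECTED = "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"

def power_on_self_test():
    """Run at module load (the VM-resident process starting), i.e. 'power-on'
    for a virtual module - not when the hypervisor boots."""
    digest = hashlib.sha256(KAT_INPUT).hexdigest()
    if digest != KAT_EXPECTED:
        # FIPS 140-2 requires entering an error state on self-test failure.
        raise RuntimeError("SHA-256 KAT failed: entering error state")
    return True
```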

Think also about how the module is delivered - with or without the hypervisor?

You may also want to address how you mitigate attacks against the hypervisor.

There is no guidance today, but that doesn't mean you can't validate. You just need to be honest and up front, and think about this in advance.

Last caveat: be careful what your sales team sells!  They probably won't understand FIPS and may sell an incorrect configuration.


Post by Valerie Fenwick, syndicated from Security, Beer, Theater and Biking!