Thursday, November 5, 2015

ICMC15: Effective Cryptography—Or: What's Wrong With All These Crypto APIs?

Thorsten Groetker, CTO, Utimaco

Effective cryptography means it needs to work and be secure, but it also has to get you where you need to go quickly and calculate the results fast.

There are many well-known crypto APIs - but there's something wrong with all of them.

PKCS#11 and security issues: there are numerous known key-extraction attacks. See Jolyon Clulow's "On the Security of PKCS#11" and the Tookan project ("Attacking and Fixing PKCS#11 Security Tokens"). There are CVE entries as well, but they don't necessarily mention PKCS#11.

Why does PKCS#11 have these issues? A confusing set of mechanisms and attributes - you need automated model checkers to determine secure configurations. Functions are broken into fine-grained operations, which opens the door to eavesdropping and insertion attacks.

[Note: these are the opinions of the speaker, not my own, as with all my blogs from this conference]

PKCS#11 is not the worst of the bunch. These same attacks can be used against Microsoft CryptoAPI (CAPI), JCE/JCA and mixed APIs. For example, under JCE/JCA there are wrap-decrypt attacks unless they are prevented by the underlying devices. For CAPI, exchange keys can also be used to encrypt/decrypt data, opening the door to wrap-decrypt attacks.
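The wrap-decrypt attack pattern is easy to sketch. Below is a toy Python simulation (XOR stands in for a real cipher, and the token API is made up - nothing here is actual PKCS#11, JCE, or CAPI) of how a key configured for both wrapping and general-purpose decryption leaks a sensitive key in the clear:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Stand-in for a real symmetric cipher (toy only)."""
    return bytes(d ^ k for d, k in zip(data, key))

class ToyToken:
    """Hypothetical token whose wrapping key is also enabled for DECRYPT."""

    def __init__(self):
        self._keys = {
            "k_wrap": b"wrapping-key-16b",    # usage: WRAP *and* DECRYPT (the flaw)
            "k_secret": b"super-secret-key",  # sensitive: must never leave in the clear
        }

    def wrap_key(self, wrapping: str, target: str) -> bytes:
        # Legitimate operation: export the target key encrypted under the wrapping key.
        return xor_cipher(self._keys[target], self._keys[wrapping])

    def decrypt(self, key: str, ciphertext: bytes) -> bytes:
        # General-purpose decrypt with the same key - the fatal combination.
        return xor_cipher(ciphertext, self._keys[key])

token = ToyToken()
wrapped = token.wrap_key("k_wrap", "k_secret")  # looks like normal key export
stolen = token.decrypt("k_wrap", wrapped)       # attacker recovers the raw key
assert stolen == b"super-secret-key"
```

The fix the model checkers look for is attribute hygiene: a key enabled for wrapping must never also be usable for general-purpose decryption.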

Efficiency is development cost and time to market. If the team is more comfortable with an API, it will be easier for them to adopt and implement. 

"Simplicity is a prerequisite for reliability." - Edsger Dijkstra. (and hence for security)  Authentication should not be an after thought. We need multi-factor and multi-person (Mout of N) authentication. And don't forget about audit logging!

People tend to underestimate the cost of data transfers - server -> CSP -> Middleware -> Network Appliance -> Driver -> HSM.

If you implement your cryptographic functions as atomic HSM commands, they will be faster and more secure.

KMIP is trying to come to the rescue with the concept of batched requests.  This addresses some performance issues, but it is not suited as a general crypto programming paradigm.

Crypto apps running within the secure perimeter of an HSM will become the norm. Drivers include security, ease of use, performance, multi-tenancy, custom logging, portability and cost. Thorsten believes that firewalling and binding a key to a specific app or device will become a hard requirement.

A PKCS#11 host program will have access to 50+ functions, 200 attributes [XXX - something got lost here - mechs were also mentioned] [Note: I don't believe there is really any PKCS#11 library that implements every mech, every attribute, every function].

Don't forget how dramatically an easy-to-use API combined with firewalling and enabled 3rd party apps can change an established market. Think about how much things have changed from our old Nokia cell phones with buttons to an Android or iOS phone.

Managed languages make this even easier.

But are people really going to develop embedded apps for HSMs?

Introducing CryptoScript: you need to write a script, load the signed script (it is automatically compiled under the hood and executed once, where it spawns threads and registers functions as commands), and invoke the newly registered CryptoScript commands from the host application in high-level languages.

Inside the Utimaco HSM there's a boot loader, OS, administrative modules, cryptographic modules and CXI (Cryptographic eXtended services Interface). CryptoScript sits on top of all that.

The basic concept: small, efficient, and MIT-licensed for easy portability. The language was pared down by removing application program interfaces, the native debug interface, the auxiliary library and OS facilities. They've enhanced it by adding secure managed memory, command handling, authentication and secure messaging.

CryptoScript does not allow direct memory addressing, so there are no buffer/stack overflows.

Once you've got CryptoScript modules, they are loaded into a virtual HSM, so they cannot directly access the actual HSM file system and memory.

A question from the audience: if PKCS#11 sucks due to complexity, how does the problem go away just because we implement it on the HSM? Answer: the attack vector is removed, CryptoScript has better debugging, and it's faster to develop.
 
Post by Valerie Fenwick, syndicated from Security, Beer, Theater and Biking!

ICMC15: Department of Defense Cybersecurity

Marianne Bailey, Principal Director, Deputy CIO for Cybersecurity, Department of Defense

Their main goal: dependable execution in the face of adversity. Now everyone is focused on cyber security, at the highest levels. Bailey is in the White House several times a week, briefing non-technical four-star generals.

Cryptography is critical to the DoD.

Attacks by state and non-state actors are increasing each year, putting all of our assets at risk.  This results in loss of personal data and network outages. Anything with a computer can be attacked. This is not just an IT problem - many things are connected to each other.

We need to establish a culture of cyber discipline - breaches are often human error. Everyone has to understand the risk and must be accountable. Cyber hygiene: configure all computers to DoD standards, make sure every computer is protected, and eliminate the use of passwords by all users and administrators in favor of credentials issued by the DoD.

Things haven't been going as fast as they want. So, they're really putting a lot of effort behind PKI infrastructure to get rid of passwords, and behind role-based access control: you can't do two things requiring different levels of privilege at the same time.

We see impersonations and privilege escalation. We have to watch, be aware, and be able to take action quickly.

Right now, too much of the nation is on the wrong side of these initiatives, which  means the attackers don't have to spend as much money or effort to attack us - but we still have to pay the price afterwards.

Commercial products should be implementing standards based cryptography, and follow other security standards.

Post by Valerie Fenwick, syndicated from Security, Beer, Theater and Biking!

ICMC15: Cryptography, Moore’s Law, and Hardware Foundations for Security

Paul Kocher, President, Chief Scientist​, Cryptography Research

While crypto has continued to get better, and protocols have improved, we are still having massive security breaches. The more complex our code gets, and the larger the code base gets, the greater our odds of more bugs. If I have twice as much code but the same amount of time to look at it, the odds of missing bugs go up much faster than linearly.

The Silver Bridge on US 35 in Ohio, built in 1924, collapsed in 1967, due to an engineering issue - gave us the term "fracture critical components".  How many "fracture-critical" elements are in a typical IoT device? DRAM, flash, storage, CPU logic, support logic. We're talking about billions of possible issues.

Our ability to understand simple elements often creates a false impression that we understand the complex system. Just because you understand transistors does not mean you understand a processor, let alone machine language.

We all make a bad assumption: software will be bug-free. Almost every device you have is 1-3 bugs away from total compromise. Defect densities are higher than we think.

For side channels, we mistakenly think that attackers only see the binary input/output data - not true! Power and RF measurements show tiny correlations to individual gates. 

Four properties for solutions to succeed. Hardware-based: it's the only layer where we know how to build reliable security boundaries. Deployable additively: legacy designs can't be abandoned, but are too complex to fix. Address your infrastructure: solutions must address both in-device capabilities and manufacturing/lifecycle. Best case: must have a very positive ROI. All stakeholders must benefit, and benefits must not depend on ubiquity.

We need to think about our security perimeters. If you put too much complexity in one boundary, catastrophic failure is more likely. We need to think like the military for how they store their ordnance - if one layer fails, nobody will have all of the munitions.

Software security is not scalable. No hope of eliminating bugs in existing software. CPU modes (like TrustZone, Ring 0) haven't helped, despite decades of trying. Separate chips/modules only work for a small subset of use cases - costly and distant from where security is needed. He has the most hope for on-die security modules.

Why? High performance, low power. When something is inside the die, it's cheaper to manufacture, lower latency, better performance.

The cost of putting in extra transistors to do security is largely immaterial today - and if you're bothered by it now, just wait 18 months. :-)

Chipmakers need solutions for in-device security and also for the enabling infrastructure. Cryptography Research's approach is their CryptoManager solution, which protects the chip.

If you get twice the code, you aren't getting twice the most desirable features - you added the core features first! But by adding twice the number of lines of code, you're increasing complexity and adding bugs.

After all of these security breaches, some government agencies in other countries have switched back to the typewriter - it's more secure.

We have to come up with solutions, otherwise security risks will erase net benefits from new technology.  For example, what is the risk vs reward of having your air conditioner connected to Internet. It's great to have the repair automatically called when there is trouble, but does it create an attack vector into your house or office?

All of these security issues mean job security for the people in this room, but we should still try to do better. :-)

Post by Valerie Fenwick, syndicated from Security, Beer, Theater and Biking!

ICMC15: Keynote: Current Issues in Cryptography

This year's conference has over 200 attendees from more than 18 countries and 18 sponsors. Oracle is one of them - we're sponsoring lunch today. :-)

Phil Zimmermann, Creator of PGP, Co-founder, Silent Circle

The last time I saw Phil speak was at DefCon2 - so I'm very happy to see him again!

Phil contested his introduction, which said PGP is the most widely used crypto software - because it's only for email, and nobody encrypts their email.  We don't have to worry about NSA cracking our crypto, because we're simply not using it.

People are more aware now that we are being watched, in part due to the revelations from Edward Snowden. The industry is coming up with new ways to push pervasive use of encryption.

Public key infrastructure has had spectacular failures, like when an Iranian hacker subverted the system, turning over keys to the Iranian government, which then ran man-in-the-middle attacks against Iranian dissidents.

Quantum computing is like nuclear fusion - we've been 5 years away from nuclear fusion for the last 50 years.  If we do actually get quantum computing, perhaps we'll actually get nuclear fusion! :-)

Phil's been thinking about quantum computing. ZRTP, a protocol he designed for secure telephony, uses an ephemeral Diffie-Hellman exchange and destroys the keys at the end of the call. There are elements in the protocol to deflect attackers - like only one chance to use a hash.

While we don't have quantum computing today, there's nothing to stop a government from storing your encrypted data from today to process later when computing power improves. Intelligence agencies have a long-term way of thinking and a lot of patience.

He had to develop ZRTP with the components at his disposal: DH, ECC, and block ciphers. You want to be able to set up new calls quickly, and still securely. Lots of things to think about here.
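The pattern ZRTP relies on - a fresh Diffie-Hellman exchange per call, with the keys destroyed afterward - can be sketched in a few lines of Python. This is a toy illustration only (the prime and generator are illustrative choices, not ZRTP's actual standardized groups):

```python
import hashlib
import secrets

# Toy finite-field Diffie-Hellman. 2**255 - 19 is prime, but real protocols
# use vetted DH groups or ECDH; this is for illustration only.
P = 2**255 - 19
G = 2

def ephemeral_keypair():
    """Fresh keypair per call: there is no long-term key to steal later."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

a_priv, a_pub = ephemeral_keypair()  # caller
b_priv, b_pub = ephemeral_keypair()  # callee

shared_a = pow(b_pub, a_priv, P)
shared_b = pow(a_pub, b_priv, P)
keys_agree = shared_a == shared_b    # both sides derive the same secret

session_key = hashlib.sha256(shared_a.to_bytes(32, "big")).digest()

# "Destroy the keys at the end of the call": dropping every secret gives
# forward secrecy - compromising the device later yields nothing that
# decrypts past calls. (Traffic recorded for a future quantum computer is
# the separate worry discussed above.)
del a_priv, b_priv, shared_a, shared_b, session_key
```

The key destruction step is the point: security of past calls no longer depends on keeping any stored key safe.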

Zimmermann is finding that it's not just regular people wanting to hide their traffic from the intelligence community - corporations are worried, too.

Phil laments that we've lost our natural inclination for conversation privacy, thanks to the widespread use of telephones. We shouldn't trust the phone companies. We should still be able to, essentially, whisper in each other's ears.

Even experienced cryptographers struggle to understand and explain trust models. Any crypto scheme that relies on the end user understanding a trust model will fail.  It needs to be so easy that anyone, even non technical people, can use it.

We focus on coming up with hard mathematical equations that we imagine our opponents facing - but that's not how the NSA thinks about this. They think of it as an engineering problem. Thinking up harder math problems won't get us very far. They are fun, and we feel smug - but that's not how the NSA works.

We have to think like the intelligence agencies - it can't just be about hard math.
 
 Post by Valerie Fenwick, syndicated from Security, Beer, Theater and Biking!

Wednesday, November 4, 2015

ICMC15: Validating a Virtual Module Without Guidance from CMVP

Steve Ratcliffe, TME, Cisco Systems

The concept of virtual computing was developed in the 1960s by IBM and MIT. It's come a long way since then - a long way just since 2001! It is replacing physical machine infrastructures. The government is asking for this, but there's not really any guidance from CMVP - so how do we do this?

Virtual systems are typically based on real, physical systems. Is it hardware? Firmware? Software? CMVP will ask this question.

Their system is built on top of hardware which is running an OS with executables. Then on top, a virtual OS with more executables and crypto modules. In between - the middle man - the hypervisor.

Webster defines virtual as not formally recognized, not real. But... we have it!

Think more about the difference between the brain and the mind.  The mind exists, but you can't touch and feel it like you can a brain. (ew!)

Customers love this - it takes less space and less power. What's not to love?

But, how do you test? Let alone validate?

Let's think about how FIPS 140-2 maps to the virtual world.

How do you define the boundary? For software, we'd just identify the crypto module. But, as we're relying on the hypervisor - do we need to include that in the boundary? It provides input into the virtualized environment, but it is not in the boundary. Your boundary will be much the same as on a non-virtualized system, just a bit different.

Firmware cannot be dynamically written or modified. Software can be, according to the CMVP. But, can you prove your virtual module is not modifiable? No... then you have a software module.

You don't own the hardware, you don't own the hypervisor, you don't own the firmware underneath - so you can't prove that the VMs are truly isolated.

What if the OS underneath is a trusted OS? Or you've created the hypervisor?

Situations vary greatly. 

[Note: I think we could have a different story on SPARC, where we do have one company that owns the chips, the hypervisor and the virtualization - but that's not a generic solution.]

The best you can easily get here is going to be level 1, unless you have specific hardware identified to provide physical protection - but then the physical layer is set, you can't change it. Doesn't buy you much.

Roles - keep it simple. Don't take stuff to CMVP that's going to cause them to ask questions :-)  There is an admin role that controls the hypervisor. That doesn't necessarily need to be in the security policy, but you better understand it. 

But, when does power-on start?  When the VM gets loaded, not when the hypervisor gets loaded.

Physical security? Doesn't really apply, as this is not a physical module! :-)

But... what is the operating environment? We have an OS running on the virtual module. This OS's access is controlled by the hypervisor. The hypervisor could be controlling other VMs. Access to the hypervisor is still a questionable thing.

One of the requirements is that you need to have a single-operator mode of operation - CMVP says if you're in a server-type setting, the server is the operator. [Note: huh?] You have to say it's a server role - a process. You do need to set this up as a protected environment. See IG 6.1 for more details.

A concern from the audience: what about entropy? Could you be starved by other VMs or by attacks on the hypervisor? These are all things you have to look at.

Key management should be just about the same. But it's a little bit harder to talk about key storage, as your keys will have to stay in the virtual module.

But RNG is a very real concern. CMVP has recently been really critical of this, though it really is a long-term problem. Can you ask the hypervisor for entropy? We can't just trust everything. Ideally, we'd like to get entropy from the hypervisor - but with a virtual system, it can't get directly to the hardware entropy source.

Software entropy sources can be used to get around these unknowns - but is it as good?

There are multiple types of hypervisors: Type I and Type II. Type II is virtualization on top of a host OS (I would think about our classic zones).

VMware uses timing for entropy; KVM, Xen and Hyper-V are similar... but different.

Each hypervisor should be treated as a new entropy process and source; each one should be tested and documented.

Hypervisor attacks are rare, but hypervisors are not immune. They are vulnerable at the network layer and from the hosts running on that hypervisor.

If the hypervisor is attacked, all of the VMs and the host itself are accessible to the attacker. So we really have to test these!

For self tests, you have to really think: when is power-on?  When the hypervisor starts?

Think also about how the module is delivered - with or without the hypervisor?

You may also want to address how you mitigate attacks against the hypervisor.

There is no guidance today, but that doesn't mean you can't validate. You just need to be honest and up front, and think about this in advance.

Last caveat: be careful what your sales team sells!  They probably won't understand FIPS and may sell an incorrect configuration.


Post by Valerie Fenwick, syndicated from Security, Beer, Theater and Biking!

ICMC15: Introduction to FIPS 140-2

Steve Weingart, Manager of Public Sector Certifications, Aruba Networks; Chris Keenan, Evaluator, Gossamer Security Solutions

FIPS are a series of U.S. Federal Information Processing Standards. They are required by federal agencies and recommended to state agencies. Financial companies are also requiring this. This is a joint scheme with Canada.

FISMA, put forth by George W. Bush in 2002, now makes it required. The standard precludes the use of unvalidated cryptography in the federal government - it's considered "plain text" (i.e., no protection). If an agency violates this, it could lose its funding (though that hasn't happened, yet).

The CMVP, Cryptographic Module Validation Program, is a joint program between U.S. NIST (National Institute of Standards and Technology) and CSEC from Canada. The logo for a validation is the 50 stars for the US with a Canadian maple leaf on the front. :-)

The US Government is not staffed to do all of this testing, so they let accredited labs do it (under NVLAP, the National Voluntary Laboratory Accreditation Program). Then NIST can review the results. The labs are checked out every 2 years, including verbal tests.


The labs are not allowed to do any consulting, to prevent conflict of interest.  This is different than Common Criteria.  You can use a lab as a consultant, but then you need to use a different lab for your testing.

FIPS 140-2 is a conformance test, not an evaluation.

It is a back and forth process - never linear.

A cryptographic module is defined by its security functionality and boundaries. Choosing your boundary is extremely critical, particularly when it comes to making changes after you go into the process or have your certificate. A crypto module can be embodied in hardware, software, firmware or a hybrid.

Your goal is to protect against unauthorized use and unauthorized disclosure of secrets. We also want to make sure that the code has not been modified (integrity check) and prevent insertion and deletion of keys (critical security parameters, or CSPs).

History - we had Federal Standard 1027 in 1982, focusing on making secret phone calls. Then, as they started thinking about crypto more broadly, 1027 turned into FIPS 140. Nothing was validated against it. Twelve years later, we got FIPS 140-1 in 1994. It is meant to be reviewed and renewed every 5 years. In 1999, the review of FIPS 140-1 started, and we got FIPS 140-2 in 2001 - just a bit behind schedule. There was a FIPS 140-3 draft, but it seemed to get buried in the comments.

Then in 2006, work started on ISO/IEC 19790 - second edition came out in 2012. It's come up that ISO/IEC 19790 may become FIPS 140-3.

As you can see, we're behind on updates. [Note: and boy has crypto technology and CPU technology changed a LOT since 2001!]

There is the base document, FIPS 140-2. Read it once, keep it handy - but it won't be your end-all guidance. Focus right now is on Annex A (approved security functions), Annex B (approved protection profiles), Annex C (approved RNGs) and Annex D (approved key establishment techniques).

Annex B is pretty empty, as FIPS 140-2 has become detached from Common Criteria.

The DTRs (Derived Test Requirements) are your daily document - that and the Implementation Guidance. Some people think the IG is more important than the DTRs - not true; both are important.

If we do switch to ISO/IEC 19790, then the IG should revert to a blank document.

Each area in a module is validated between security levels 1-4.  The Overall Security Level is the lowest level awarded in any of the levels.  Most people just validate all components at the same level, but some customers (like the US Postal Service) will specify higher levels required for some functions.

There are eleven major functional areas. Some are very broad; others, like EMI/EMC, are very hardware-specific. All of these requirements are hard to "paste on" later.
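The "lowest level wins" rule for the overall security level is trivial to express. A toy illustration (area names abbreviated and levels invented, not any real module's results):

```python
# Each of the eleven FIPS 140-2 areas gets its own level from 1-4;
# the overall level is the minimum across them.
area_levels = {
    "module_spec": 3, "ports_interfaces": 3, "roles_auth": 3,
    "finite_state_model": 3, "physical_security": 2, "operational_env": 3,
    "key_management": 3, "emi_emc": 3, "self_tests": 3,
    "design_assurance": 3, "mitigation_of_attacks": 3,
}
overall = min(area_levels.values())
assert overall == 2  # one level-2 area caps the whole validation
```

This is why validating every area at level 3 except physical security buys you nothing beyond a level 2 certificate.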

You can claim "bonus" things - like doing RSA operations in constant time. In order to claim it, your lab has to prove it. Once approved, it can be listed in your certificate.

You must define all entry and exit points. Data Input, data output, control input, status output.  At levels 3 & 4, physical ports must be physically separate.

A cryptographic module shall support authorized roles for operators and corresponding services within each role. Multiple roles may be assumed by a single operator. If a crypto module supports concurrent operators, then the module has to maintain and track this.

Authentication mechanisms may be required within a cryptographic module to authenticate an operator accessing the module, and to verify that the operator is authorized to assume the requested role within the service of that role.

If you're at level 1, no access control is required. Level 2 is role-based; levels 3 & 4 have identity-based requirements.

You have to actually produce a finite state model. This is easier for old hardware engineers. You can't be in multiple states at one time. Think about the fatal error state: pretty much all states should fail into it if something goes wrong. This is a bit different with ISO/IEC 19790, where you only have to shut down the stuff having problems.

For physical security - thanks to the credit card breaches from last year, we all have a single-chip module in our pockets! (Smart cards embedded in our credit cards.) For level 1, you can get away with using commonly available, well-known chips. A very common approach at level 2 is a label that is torn if the system is opened or tampered with. Of course, this means hackers have been focusing on how to remove labels and reapply them with no evidence - so sticker makers have had to step up.

At level 3, you could dip the module in epoxy or use a pick-resistant lockable case.

For level 4, the lab can use ANY method they can imagine to attack your module. This is really, really hard. Do you have a good reason to get this?

There are requirements around the operational environment - the OS your module runs on. The most common environment is a regular OS. Anything over level 1 is hard; you'll need signed updates and integrity checks. At level 2 and above, folks tend to use a non-modifiable operational environment.

Key management is a big one! The entire life cycle of the key: how are keys created? Where do they go? Can they be moved? How are you wrapping them?

Key management covers RNG and key generation, key establishment, key distribution, key entry/output, storage and zeroization. You can use the key management of another module.

Zeroization can be a problem with compiler optimizations - they will "help" you and leave your key in memory.

There are EMI (Electromagnetic Interference) and EMC (Electromagnetic Compatibility) requirements. EMI requirements make sure the module doesn't interfere with other equipment. For EMC, in general, an FCC Part 15 Class A or B certificate is needed. WARNING: there are lots of fake FCC certificates and counterfeit labs out there.

This can be important, as we now have people analyzing EMF signals to predict key material.

Power-up self tests require that you test everything when you start up. If a test fails, the module must enter an error state. All of the operations must be tested "one way" - that is, you cannot encrypt data and then decrypt it and claim success; that only proves you created a reversible operation, not that you're actually doing AES, for example.
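In practice, a power-up self test is a known-answer test (KAT): run the algorithm on a fixed input and compare against a stored expected value, rather than round-tripping. A minimal Python sketch using SHA-256 and its published "abc" test vector (the function name is illustrative):

```python
import hashlib

KNOWN_INPUT = b"abc"
# Published SHA-256 test vector for "abc".
EXPECTED_SHA256 = bytes.fromhex(
    "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"
)

def power_on_self_test() -> bool:
    """Compare computed output to a stored known answer - a one-way check."""
    return hashlib.sha256(KNOWN_INPUT).digest() == EXPECTED_SHA256

if not power_on_self_test():
    # A real module would latch an error state and refuse all crypto services.
    raise SystemError("POST failed: entering error state")
```

Because the expected value is fixed in advance, a broken implementation cannot "agree with itself" the way an encrypt-then-decrypt round trip can.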

There are also conditional tests, run when a specific operation that requires them is invoked, like key generation or using random numbers. Again, if a test fails, the module must enter a failure state and do no further operations.
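A classic conditional test is the FIPS 140-2 continuous RNG test: each fixed-size output block is compared with the previous one, and any repeat forces the error state. A sketch (the class name and block size are illustrative):

```python
import os

class ContinuousRngTest:
    """FIPS 140-2-style continuous test wrapped around an entropy source."""

    def __init__(self, block_size: int = 16):
        self.block_size = block_size
        # First block is used only for comparison and must never be output.
        self._previous = os.urandom(block_size)

    def get_random(self) -> bytes:
        block = os.urandom(self.block_size)
        if block == self._previous:
            # A real module would latch this failure state permanently.
            raise SystemError("RNG continuous test failed: entering error state")
        self._previous = block
        return block

rng = ContinuousRngTest()
a, b = rng.get_random(), rng.get_random()
assert a != b  # guaranteed: a repeat would have raised instead
```

A stuck-at entropy source (a common hardware failure mode) is caught on the very next request instead of silently feeding constant "randomness" into key generation.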

This will be a little "lazier" in ISO/IEC 19790. Currently, all tests must pass before you do anything. In ISO/IEC 19790 you'll be able to start using SHA-2, for example, as soon as its testing is done - even if other tests are still running.

To prove you've done it correctly, you have to show the lab that you can trigger the failure. Some add a special flag that will always fail, or make a special version of their code that always fails.

For design assurance, you need to use best practices during the design, deployment and operation of a cryptographic module - like coding standards, reviews, etc.
 
You can claim mitigation of attacks - this is optional - like shielding or timing-attack resistance. While optional, the lab has to verify/witness it.

Okay - that's all done, what next?  Laboratory testing!

This is where the Derived Test Requirements (DTRs) come in - repeatable and testable conformance. If you gave this module to any one of the 23 labs, it would pass or not pass for all of them.

Each entry in a DTR has 3 parts: the assertion (a direct quote from the FIPS 140-2  standard), requirements against the vendor to prove it, and finally a set of requirements levied on the tester of the module (ie the lab).

When you go through Cryptographic Algorithm Validation, it will prove you are using an approved algorithm in its approved mode of operation.  It can be a longer process, as you have to write a test harness, but the interaction with CAVP tends to be faster.

There is a NIST CAVS tool - but only the labs can see/use it, not you as a vendor. The lab generates test samples and sends them to the customer. The customer uses the samples as input, generates the cryptographic output, and returns the results to the lab. The lab checks the results and, if they match, submits them to the CAVP for the certificate. You should have your own harness that takes the arcane CAVS input to verify your modules quickly as you're doing your work. There are sometimes small variations you need to handle.
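A harness of the kind described here just parses the vector file, runs the algorithm, and compares. A simplified Python sketch for one SHA-256 response entry (the real CAVS files have more fields and sections; this bare Len/Msg/MD layout is an assumption for illustration):

```python
import hashlib

# One entry in a simplified CAVS-style response file: Len is in bits,
# Msg is the hex-encoded message, MD the expected digest.
sample_rsp = """\
Len = 24
Msg = 616263
MD = ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
"""

def check_rsp(text: str) -> bool:
    """Parse key = value lines, recompute the digest, compare to MD."""
    fields = {}
    for line in text.splitlines():
        if "=" in line:
            key, value = (s.strip() for s in line.split("=", 1))
            fields[key] = value
    msg = bytes.fromhex(fields["Msg"])[: int(fields["Len"]) // 8]
    return hashlib.sha256(msg).hexdigest() == fields["MD"]

assert check_rsp(sample_rsp)
```

Having this in your build lets you catch a broken implementation long before the formal exchange with the lab.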

Important things to remember: encryption and decryption are tested separately. In counter mode, the test is performed on ECB mode and then the counter design is examined separately. It must be shown that a particular counter value can only be used once with a key.

If you want to limit the testing and churn, keep your boundary very small so you can reuse it.

If you have non-cryptographically-relevant changes, you can do a change letter (1SUB) - those can go through CMVP very quickly, but your lab will have to review your code diffs and do some regression tests (so, not free). A partial revalidation (30% or less of your module has changed) will require a partial retest. If you've changed more than 30%, then a full revalidation is required.

Correctly and optimally defining your crypto boundary is the single most important decision you can make. Make it as SMALL as possible - but you can't leave things out!

Post by Valerie Fenwick, syndicated from Security, Beer, Theater and Biking!

ICMC15: How Not To Do a FIPS 140 Project

Steve Weingart, Manager of Public Sector Certifications, Aruba Networks; Chris Keenan, Evaluator, Gossamer Security Solutions

Surprise! Not only will our speakers tell us how *not* to do FIPS, but as an added bonus, they will tell us the right way to do it.

There are a lot of ways to do FIPS wrong - they will make things harder and more expensive.

FIPS requirements are precise; the requirements are not soft - you can't just dive into a validation without planning. No matter who you are or what your module is, you will have to meet all of the requirements to get your validation. Yes, they are confusing, but they are still mandatory.

How can you read the standard? It will put you to sleep - true, but there is critical information in there.

"Crypto design really is rocket science" :-)

Where are your keys? Are they stored correctly? Are they deleted properly?

You simply cannot design and develop your product and do FIPS later. Once you're validated, you are set - and you might find that the changes you made to pass validation will not work with older versions. You can't add FIPS 140-2 like a coat of paint; it has to be designed in from the start. [Note: You can do so... it's just painful and slow. :-)]

Why should you engage a lab before you start testing? They will help prevent you from making mistakes that everyone makes.  And you will have to show them your code and schematics.  Think of them as a partner.

Bigger is better, right? You want the highest security level, right?!? Um... do you know what it means to do a formal model of your code? So maybe level 4 is not appropriate. Keep in mind that most software modules cannot get above level 1, with some getting level 2 with special considerations.

Why do you need to document things after showing the lab your code? Will anyone even read it? Yes - the lab and CMVP will, and it needs to be available to your users.

Best practice is to write the security policy in step with your architecture diagram - really designed for FIPS - but honestly, nobody does this (but if you did, you'd be happier).

This is not a fast process - the queue is 3-6 months, sometimes a year long. That doesn't count the time for processing, etc.

It's not usually okay to use the /dev/random from the OS. [Note: what if the OS's RNG is validated?]

If you choose a higher security level, then you should count on having the materials you're using scrutinized. Even if you have a "non FIPS mode" - your basic device may still need to meet these requirements.

Step one: get your CAVP certificates. That means you have to use approved algorithms. Now, if you have proprietary algorithms you don't need certificates for them, but they also won't be included in your CMVP validation. [NOTE: designing your own algs? do you have rocket scientists on your team?]

Your security policy document describes how the product meets the requirements of the FIPS standard. There are test requirements around this.  No getting around this.

Are all the labs the same?  Well... they are all accredited, but they all have strengths and weaknesses and areas they excel at.  Some labs prefer to test at their site, some at yours. Some need specific types of evidence, others don't. Some focus on software, others on hardware. It's about finding the right fit for you and your product.

Have we mentioned the best thing to do is to start early?

Question: what about transitions?  You could do your work early, and be ready to go... then new "guidance" comes out. Now, having a relationship with your lab, you should get this information early.

Implementation Guidance is a living document, updated when NIST finds that the document doesn't have enough clarity, the world has changed, or mistakes in the standard need correcting.

It doesn't happen often, but there are sometimes cases of guidance being applied retroactively - a big pain for those in the queue. You need to have management buy in and resources until you have your certificates.

Question: If you're the OS vendor and you get your entropy generation approved, why can't you use the OS RNG? Answer: actually you can, as long as it's used in a compliant way that meets the requirements in the security policy. That's "FIPS inside".

Question: You noted that guidance doesn't add new requirements, but that's not true. We need new RNG, and entropy estimation (never required before). NIST: we sometimes have to do this, because of security concerns (like in the case of entropy). There's always been a requirement on showing where you're getting your random source, so now there's a requirement for more evidence (that is a new requirement). NIST may not have checked that before, but the requirement was there.

Question: what if you're using someone else's chips - how do we provide source code? Commercially available hardware is exempt from this requirement. Now, if there is an algorithm implementation in the chip and it hasn't been through algorithm testing, you might have to.

Question: How do you do advanced planning when the IG changes? Yeah, that happens. We hope it doesn't, but it does.

Question: If I use the OS's RNG and it's validated, can I use that as a random seed? Yes, but... can we still do this with the latest news about sunsetting older RNG validations? Also, HOW you are using it is important.


Post by Valerie Fenwick, syndicated from Security, Beer, Theater and Biking!