Thursday, November 19, 2015

OWL: Bias in the Workplace

Professor Joan C. Williams, Hastings Foundation Chair; Director, Center for WorkLife Law; University of California, Hastings College of the Law.

I had read Joan's book - What Works for Women at Work - and LOVED it, so I was so happy to see Oracle brought her onsite! Her book is well-researched and science-based, and gives lots of great everyday strategies for women in today's workplace.

She interviewed more than 100 successful women, and did additional research. Ninety-six percent of all women interviewed had experienced at least one type of bias at work.

She wrote an article for Harvard Business Review, "Why We Hate Our Offices: And how to build a workspace that you can love".

The first type of bias is "Prove it Again!" syndrome, experienced by 68% of the business women and scientists she interviewed.

One gentleman she interviewed had transitioned from being a woman, and stayed in the same field of science. He overheard people who, confused about the female scientist with the same last name, said that "his work is so much better than his sister's". He doesn't have a sister - that was his own work, published before he transitioned.

There is the "stolen idea" syndrome - people expect great ideas to come from men, so they don't hear an idea when it comes from a woman.

Women's and men's mistakes are remembered differently, and many of these same biases also apply across racial lines. For example, people were asked to review a legal memo - the same memo, with the same errors - but when reviewers thought they were reviewing a document written by a black man, they found more errors.

What are the most important factors in determining networks? Similarity, attractiveness, and location.

How does this play out in the workplace? In an org where the people on top are a certain demographic, they are going to sponsor people who are like them. (A sponsor is a mentor who is willing to spend their political capital to help their mentee's career.) Men tend to be judged on their potential, women on their results. This keeps women stuck in the "prove it again" loop.

Women of color trigger two sets of negative stereotypes: gender and race.

When they interviewed scientists, they were surprised that women of Asian descent reported the "prove it again" pattern more often than white women, even though there is a stereotype that Asians are good at science. Apparently the women in this group were exempt from that positive stereotype.

So, how do you get out of it? Well, prove it again - but try not to burn out. Keep careful, real-time records that track your accomplishments. When you get compliments? Forward the email to your sponsor and manager.

How can managers level the playing field for women? Look around - who are you sponsoring? Is there a certain pattern? Do you need to widen the group? Male managers sometimes worry that taking a woman out to lunch or coffee would look "weird" - it doesn't. If you would do it with the men you were sponsoring, then it should be appropriate for women. Now, if you do a lot of your bonding in the men's locker room... then you need to think about doing this in ways where it wouldn't matter if you were sponsoring a man or a woman (mentoring, etc.).

Imagine you are sitting in a meeting and you see the stolen idea occur. How do you intervene? Lots of ideas, but saying something is better than nothing. Something like, "Thanks, Paul, for going back to that - I've been pondering it ever since Pam brought it up."

What if you are sitting in a meeting and you see men being judged on their potential and women on their performance - how would you intervene? This happens a lot when it comes to promotions; women are often already doing the job before they get the promotion. One idea for getting around this: suggest evaluating the engineers first by their accomplishments, then by potential. Another: ask, "Are we being consistent here?" Or: "Now that we know what we're looking for, let's go back to the top of the pile and re-review everyone."

Often the most savvy way to call out bias is not to mention that's what you're doing :-)

What works for organizational "prove it again": set up a precommitment to what is important (for promotion, for example), and when someone deviates from it, they need to have justification.

Women are expected to be nice and communal. Men are expected to be competent and "agentic" (assertive, direct, competitive and ambitious). Nobody thinks of a strong leader as "nice", so women often aren't even considered for leadership positions.

This varies by culture - the US/Canada and UK believe a leader should be independent, a risk taker, direct, and focused on tasks. It's the opposite in India/China/Japan: interdependent, valuing certainty, indirect, and focused on relationships.

Ben Barres, the transgender scientist, noted that "by far, the biggest difference is people treat me with respect. I'm interrupted less" since becoming a man.

How you stand and sit telegraphs power or submission. To demonstrate authority, stand with feet apart - stable.

Ellen Pao was described both as "passive, too quiet at meetings" and as "entitled, demanding".

Women get pressure to be deferential or play the office mom - always deliver, but never threaten. Women are pressured to do the office housework: planning parties, getting gifts, taking notes, scheduling meetings, mentoring, and doing the undervalued work (paperwork, etc.).

But, if you're stern or say no - you're not modest or nice. You become the "B" word.

As a woman, you need to claim your seat at the table and practice power poses. You need to learn how to get a word in edgewise - learn how to politely interrupt: "Oh, I'm sorry, I thought you were done."

What works for one woman won't work for another. You need to be authentic, and know when to say no (and how!).

Managers - how should you handle "office housework"? Don't ask for volunteers - women will feel gender-stereotyped pressure to volunteer. Assign true admin and housework to admins or true support personnel. Possibly, for things like minutes, do a rotation.

Spread the load and set norms. For example, everyone does one "citizenship task" - sitting on committees, and everyone does their own ordering, billing, etc.

There was a non-peer-reviewed study on performance evaluations. Men got specific feedback. Women got things like "bossy, abrasive, strident, aggressive, emotional, irrational" - very strong prescriptive gender bias.

Women are also impacted by maternity bias - mothers are 79% less likely to be hired, are held to higher standards for punctuality, are offered lower starting salaries, and are promoted less.

Indisputably competent and committed mothers are seen as LESS likable, particularly by women.

Mothers are not offered stretch positions because people assume she's busy with kids. Managers: don't assume! If she's the best person, offer it - and let her know that similar positions will be available in the future if now is not a good time.

There is this false sense that only one woman can get promoted, get an award, etc - so they will compete with each other, instead of working together.

A study found that not one female legal secretary expressed preference to work with a female lawyer as a boss.  "Females are harder on their female assistants, more detail oriented, and they have to try harder to prove themselves, so they put that on you."

Older women in the workplace discourage younger mothers from working part time after maternity leave, because "I worked full time right away and my kids are fine". That is, "I did it the hard way, why can't you?"

One study found that women without children work more unpaid overtime hours than anyone else in the workplace - they are seen as the pathetic spinster with nothing better to do, so why can't you work these hours?

To get organizational change: do a "4 patterns assessment" - is bias playing out in everyday work interactions? Then develop an objective metric to test whether what women think is happening actually is, and make adjustments.

Example: given the study of performance reviews, companies should review their reviews for this language to see if it mentions bias-based negative personality traits. Look at an objective metric: promotion rates. Interrupt: have someone trained to spot bias read all performance evaluations (or use an app), redesign evaluations, and provide workshops for your managers.

We all have unconscious bias. This is usually not malicious. But if you are ignorant, whose fault is that? We need to be aware and take action to counteract it.

Friday, November 6, 2015

ICMC15: Importance of Open Source to the Cryptographic Module Community

Chris Brych, Senior Principal Security Analyst, Oracle

Chris's talk will focus primarily on OpenSSL's history and potential future.

The OpenSSL project has its roots back in 1995, in the SSLeay project (Tim Hudson and Eric Young). In 1998, Tim and Eric joined RSA and started a new life and career in crypto development. There were still a lot of followers; Ben Laurie and Dr. Stephen Henson took over maintenance and changed the name to OpenSSL. In January 1999, 0.9.1 was released. It is mostly maintained by volunteers. Today, we're at 1.0.2d.

There are 15 people around the world who work on OpenSSL; most are volunteers. There are 2 full-time paid employees, with plans to hire 2 more. There is no direct source of funding; they are relying on some private contributions and some donations. Consider that 7,098,576 web servers use OpenSSL (0.9.7 through 1.0.2) - and this does not even include application uses of OpenSSL, like routers.

In 2003 Steve Marquess, working for the US DoD, embarked on a FIPS 140 project to validate a cryptographic toolkit derived from an OpenSSL distribution, to get around exorbitant licensing fees from 3rd party toolkit vendors. He needed a more cost-effective solution.

A fork was created in the OpenSSL distribution to isolate the FIPS crypto into its own distribution. There is a repeatable build process that produces the exact same module that has already been validated. A hash was calculated over fipscanister.o; verifying it against the hash published in the FIPS Security Policy document guaranteed that the same code that was validated could be used to build the FIPS Object Module (FOM). A contiguous boundary created in memory allowed for integrity checking.
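The check described above boils down to comparing a digest of the built object against the value printed in the Security Policy. A minimal Python sketch of that kind of comparison (the helper names are mine, purely illustrative - the real FOM build performs its own verification):

```python
import hashlib

def file_digest(path, algo="sha1"):
    """Compute the digest of a built object (e.g. fipscanister.o) in chunks."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_security_policy(path, published_hex):
    """True if the built object matches the hash published in the Security Policy."""
    return file_digest(path) == published_hex.lower()
```

If the digests match, you know the bits you built are the bits that were validated.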

This was based on the 0.9.7e distribution. But the DoD still wanted to make some changes to make it even more secure.

The community saw value in someone else doing this work and started picking this up and then making contributions upstream to make it more secure.

Why use this? Well - it's free!  (subject to license restrictions).  If you're a small company, you can't afford the cost of 3rd party toolkits with FIPS support.

There's also low internal maintenance, because OpenSSL does it for you. Engineering can focus on new development features, and new updates can be picked up, easily integrated, and built on the existing code base.

Since this was so widely accepted by many companies, many vendors stopped making their own crypto libraries. Many people, beyond the 15 core volunteers, contribute fixes.

There are disadvantages....

For example, if a bug has been publicly identified, it is out of your control - you have to wait for OpenSSL to create a fix.  You may need to wait on OpenSSL to re-certify itself, slowing down your business.  OpenSSL is not for everybody - the 80/20 rule applies here.

0.9.8 and 1.0.0 OpenSSL distributions will be EOL on December 31, 2015.  The FIPS Object Module version 1.2.4 (Cert #1051) will no longer be supported.

1.0.1 is EOL December 31, 2016 - so start thinking about going to 1.0.2. FIPS Object Module cert #1747 is still supported with 1.0.2.

But 1.0.2 will only be around until December 31, 2019. That seems a long way out, but it will be here before we know it. Then we won't have anything to use #1747 with anymore.

NIST just published a draft of SP 800-131A r1 in July 2015, which specifies that DH, ECDH and RSA will be non-compliant in 2018. The FIPS Object Module is not fully compliant with NIST SP 800-56A/B key agreement: only primitives testing for ECDH was completed, the KDF was not tested, DH was not tested as part of NIST SP 800-56A, and RSA is not compliant with SP 800-56B - and OpenSSL does not plan to do this development.

1.1.0 is planned to be released in April/May 2016, and will include an API change. It will not be compatible with FOM 2.0.10. That is, we lose our FIPS module!

This means that if organizations need FIPS and are now relying on OpenSSL, they will have to hire people to complete this work. They will have to make modifications to a code base containing over 500,000 lines of code - risky. What if they introduce other vulnerabilities? Many vendors will have an OpenSSL validation - but they won't be the same. Confusing for end users.

Money won't solve the problem. OpenSSL is not interested, even if some vendor offered them money.

How can we solve this? If you're interested, please participate in the CMUF. We will have to get input from the CMVP, and the US Government needs to be involved.

Post by Valerie Fenwick, syndicated from Security, Beer, Theater and Biking!

ICMC15: Collateral Damage—Vendor and Customer Impact of Frequent Policy Changes

Joshua Brickman, Director, Security Evaluations, Oracle; Glenn Brunette, Senior Director and Chief Technologist, Cybersecurity, Oracle

Glenn started the talk.

Oracle is a big company, nearly 40 years old. It originally started with one product, but now has thousands of products, with 400,000 customers across 145 countries. This is a lot of variance, which makes doing FIPS 140-2 at Oracle a challenge.

Many of our products and delivered systems cannot change frequently, for example products used in medical trials.

Oracle has developed many products for FIPS 140: Solaris Cryptographic Framework, StorageTek, Java Cards, Acme Packet Session Border Controller, etc.  Also leverage third-party validated modules ("FIPS Inside") like RSA BSAFE Crypto J and OpenSSL.

It's not even simple when you talk about one product. For example, our Database interacts with several different crypto modules.

We do our best to educate our customers and internal developers, try to figure out how to compare apples and oranges and figure out operational decisions.

FIPS 140 can be easy to understand in one breath, and complicated to understand in the second. Is it approved or not?  Validated vs approved?  What modes and key lengths are you validated against?

We need to be more specific.  Marketing and product documentation might say: "We're FIPS validated! We're Encrypted" - but our customers want and need more details. Even saying a product uses AES may not be sufficient, as they need to know modes supported.

Module vs product validation.  Is "Oracle Weblogic" FIPS 140 validated?  What about Oracle Solaris? It could have multiple libraries and modules under the covers.

But if you use the "FIPS Inside" approach, you won't be listed on the CMVP website, so customers may doubt the vendor's sincerity. It would be nice if there were an official place to list folks leveraging the FIPS Inside strategy.

Module versioning is not consistent with product versioning. You can get away with very vague versioning schemes, which makes it challenging for customers.

How do you map a product's cryptography to modules? A big product, like an OS, may not validate every cryptographic module included in the system. Even if the OS vendor works really hard, they might only get 80%. How does your customer know if they are using one of the non-validated use cases?

Many times products do not ship with FIPS 140 on by default, so there is a challenge to even get it turned on. Then, customers layer products - particularly in the cloud space.

The NIST website can be very clear about when you should use FIPS, yet customers don't interpret this correctly. They may only be concerned with TLS, but are actually using more. They've sometimes done the horrible workaround of dropping crypto, because they don't have a validated solution. Some customers get this, others do not.

Lack of understanding at the customer site leads to a lot of problems. We need to plan for this when making future standards, so folks aren't taking the shortest path (which is usually the wrong approach).

Josh thinks there's a lot of hope, he sees progress coming around the corner.

It's clear that NIST/CMVP want the strongest crypto NOW for their customers. Vendors want the strongest crypto for their customers, with the least performance impact. Customers WANT this, but for a low price.

Remember the critical Shampoo algorithm: lather, rinse, repeat! This should apply to the CMVP process.

Vendors face big challenges, like AES-GCM IV generation. In 2007, NIST published SP 800-38D, with two IGs to follow. In 2009, IG A.5 added Key/IV pair uniqueness requirements inside the physical boundary. In 2015, IG A.5 was overhauled, based on the discovery of one vendor's bad application that used all zeros for their IV - so we're all punished. The change came in August 2015, with no grandfathering.
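For context on why IV uniqueness matters: SP 800-38D's deterministic construction builds the 96-bit IV from a fixed field plus an invocation counter, so an IV can never repeat under a given key until the counter is exhausted. A rough sketch in that spirit (the class name and field widths are illustrative, not from the talk):

```python
import struct

class GcmIvGenerator:
    """Deterministic 96-bit GCM IV: 32-bit fixed field || 64-bit invocation
    counter, in the spirit of SP 800-38D's deterministic construction."""

    def __init__(self, device_id: int):
        self.fixed = struct.pack(">I", device_id & 0xFFFFFFFF)  # identifies this module
        self.counter = 0  # invocation field; must never repeat for a given key

    def next_iv(self) -> bytes:
        if self.counter >= 2**64 - 1:
            raise RuntimeError("IV space exhausted; rekey required")
        iv = self.fixed + struct.pack(">Q", self.counter)
        self.counter += 1
        return iv
```

An all-zeros IV on every call - the bug that triggered the IG overhaul - fails this construction immediately, since the counter guarantees each IV is distinct.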

When we saw the new IG draft, we stopped all ongoing work to analyze the new IG. Oracle wrote up a proposal to mitigate the impact of the new IG while still meeting its spirit, and sent it to the CMVP. While waiting for a response, the CMVP issued another IG. On August 5, still without responding to Oracle's comments, A.5 came out. How is that working with vendors?

IG 7.15 was published in August, again with no grandfathering, and we found a project that got its entropy from a 3rd party and we could not prove it. The third party treated this as intellectual property, so they did not want to work with the CMVP. Oracle volunteered everything we did know to the CMVP, and begged for a waiver. How do you build a business around that model? Did everyone get a waiver? Will our next project get a waiver?

How do we get out of that cycle?

We could form a technical community (the CMUF working with the CMVP). Take a page from NIAP and work together to solve problems. Take advantage of the vast resources of industry! Create one group for IGs, one for FIPS 140-4, etc. Instead of throwing IGs over the wall, work together to come up with consensus.

We need time to react - there MUST be time to transition. Every reactive response to an IG means less time for industry to build product and fix bugs. [Note: by panicking to react, we may end up introducing worse issues.]

NIAP and NIST need to go back to being a partnership - see Entropy!

Engage with other crypto schemes to see if any mutual recognition can be negotiated.

We can do better by working together, so let's do that :-)

Post by Valerie Fenwick, syndicated from Security, Beer, Theater and Biking!

ICMC15: FIPS is FIPS, Real World is Real World and Never the Twain Shall Meet?

Ashit Vora, Co-Founder and Laboratory Director, Acumen Security

Ashit's journey has taken him from a Lab, then to a Vendor, and now back to a Lab.  He's seen the experience from both sides.  He's been working with FIPS 140-2 since 2003. The world has changed greatly since then, but the standard has not.

FIPS 140-2 has been approved since May 2001. It was very hardware focused, as that's what was common in the marketplace. There was only a small handful of approved algorithms.

There is a requirement that the standard is supposed to be updated every 5 years, so there was a draft of FIPS 140-3 in September 2005... a draft. As of now, we *might* get FIPS 140-3 in May 2016; if it's delayed even a little bit, it will turn into a long delay, as the standard has to be signed off by the Secretary of the Department of Commerce - a political appointee, who will change when there is a new US President.

FIPS 140-2 is an open commercial standard. The standard itself is 69 pages. Many vendors read only that and say, "I've read and understood the standard and we're ready!" But there are 127 pages of Derived Test Requirements, and 63 pages of Implementation Guidance. The IG was originally supposed to be an FAQ, but it has evolved to really be the new rules.

There are 287 requirements in FIPS; 50% of them are documentation requirements. More than 50% of the requirements are orthogonal to crypto.

OpenSSL had 5 vulnerabilities in 2013, 22 in 2014, and 26 in 2015 - but how many times has its FIPS module changed since 2013? Zero.

The CMVP believes FIPS is a solution, vendors think it's a means to an end (to sell to the government), and to the federal agencies it is merely a checkbox.

What is it really? A bit of everything and a bit of nothing. It ensures that what is claimed has been implemented correctly - that's a good thing! At Levels 1 and 2, it provides little more assurance than that the product implements crypto per spec. FIPS validation does not mean the overall "cryptographic posture" of the system is secure.

The standard was developed with the best intentions, but it has not been able to change with our fast changing industry.  There have been a few improvements, like the changes for CRNG relaxation and possible relaxation on integrity checks.

COTS products are easy to buy and open at your leisure. This causes vendors to downgrade to Level 1 or design purpose-built "opacity shields". Tamper labels, in a similar vein, add nothing to the security posture of the product. Products are rarely deployed with the opacity shields and tamper labels in place - they just don't fit in the rack. Plus, end users need to make modifications.

Real world: key management. It is possible to have a module FIPS validated without including key management at all. In fact, MOST software libraries do not include key management. This is a direct result of crypto boundaries shrinking!

Password requirements are rudimentary at best. No consideration for password complexity, frequency, etc.

Real world: OpenSSL. It is the MOST widely used cryptographic library in the world. It's most prevalent in networking products, but is also showing up in IoT, etc. Many products gain FIPS compliance just by including OpenSSL. That validation, however, does not cover TLS, key management, etc. Hence, the small boundary.

FCC/FSM and configuration management requirements do NOT add security.  The FSM (Finite State Machine) is almost always created AFTER the product is completed.

There is a software/firmware load test - it checks that the image is valid, but there is no root of trust. A self-signed cert gets you through this check.

We should follow the CC direction: use the standard as a base/toolkit and provide technology-specific requirements (that map back to the standard).

Do not tie validations to specific versions. Allow for minor changes/bug fixes. Validations take a long time (12-18 months), but vendors tend to push fixes every month. Often very minor. Yes, there may be that one corner case where one vendor screws this up and makes a critical change - but why punish all vendors for one person's mistake?

Due to this, vendors are making VERY small boundaries, which means we're actually looking at a tiny portion of the security relevant system. Need to encourage larger boundaries, but that's only possible if engineers can fix small bugs.

Spend time on the most important sections: Section 1, Section 5 and Section 7.

Focus on key life cycle! Make those requirements more all-encompassing. Implementing crypto algorithms is easy (?!?), but key management is hard. [NOTE: it's all hard :-)]

Question: what about making physical security optional? Maybe, but it should be tied to the expected deployment environment. For example, a publicly accessible wifi access point might need to be more strict than a router buried in the bowels of a government building.

Randy Easter recommended just doing Level 1 for physical security, but Ashit noted that if they do everything else to Level 2 they will end up with a Level 1 overall validation. Randy noted that this is a matter of informing end customers - easier said than done!

Randy also wants to know how CMVP has relaxed rules w/out notifying the Secretary of Commerce?  Ashit noted that "that ship has sailed" with the Implementation Guidance.

Post by Valerie Fenwick, syndicated from Security, Beer, Theater and Biking!

ICMC15: Commonly Accepted Keys and CSPs Initiative

Ryan Thomas, FIPS 140-2 Program Manager, CGI Global Labs

Vendors understand their products and security, but are mystified by how to list keys/CSPs in the way that the CMVP wants.

The labs understand the requirements, and review any different types of vendor submissions that meet the requirements - and really want to provide something that is consistent and easy to review by CMVP.

Ryan has the idea to make life simpler by creating a shared template for the Keys and CSPs table. He's gotten feedback already from the CMVP.

Vendors will benefit from a pre-populated list that they can customize to their implementation - it saves time and effort!

This isn't perfectly straightforward - it will vary by software vs. hardware modules. Does the module have persistent or ephemeral keys? If it's software, the vendor is likely not making key storage claims.

Picture of template.....

Column #1 - Key/CSP Name. Mapped to a NIST SP, ISO, or IETF RFC - i.e., normalized names: a consistent and clear name that means the same thing to everyone.

Column #2 - Key/CSP type - type of key, algorithm and size.

Column #3 - Generation Input. Explain how the keys are derived or generated. If it's entered, specify whether the CSP is entered electronically or manually, encrypted or plain text.

Column #4 - Output - encrypted or plain text

Column #5 - Storage. Is it stored in memory, flash, HD, etc. (encrypted or not)? (There seemed to be two #4 columns, so my numbering is off.)

Column #6 - Use. How is the key used during module operation? It needs to be mapped directly to an approved service that the crypto module performs.

Column #7 - Zeroization. How will the CSP be zeroized? All possible techniques need to be listed.
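To make the seven columns concrete, here is one hypothetical pre-filled row; the values are mine, purely illustrative, and not taken from Ryan's template:

```python
# One illustrative row of a shared Keys/CSPs template, using the seven
# columns described above. All values are hypothetical examples.
csp_row = {
    "Key/CSP Name": "TLS Pre-Master Secret (IETF RFC 5246)",
    "Key/CSP Type": "Shared secret, 48 bytes",
    "Generation/Input": "Established via RSA key transport; entered electronically, encrypted",
    "Output": "Never output from the module",
    "Storage": "RAM only, plaintext",
    "Use": "Derivation of the TLS master secret (TLS service)",
    "Zeroization": "Overwritten on session close; power cycle",
}
```

A pre-populated set of rows like this is what would save vendors the guesswork about what the CMVP expects in each column.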

As for case studies, Ryan started with a "simple" NIST SP 800-90A hash-based DRBG. He still found some inconsistencies.

More difficult are network protocols, like SSH, SNMP, IPsec and IKE - IKE is particularly tough.

TLS has a lot of potential rows, so they want to work with the CMVP on this one as well, before showing it to the public.

This will get posted to the CMUF (Cryptographic Module User Forum) site; contact Matt Keller if you need access.

Jennifer Cawthra, CMVP Program Manager, NIST, noted that this would really be appreciated, as it will help speed up their reviews. They are also looking for a security policy template - which the CMUF is also thinking of picking up.

From a vendor perspective, this looks pretty cool. I think by having a standard template, we could then freely discuss with other vendors without having to worry about disclosing some proprietary format; there could be websites to help walk us through it, etc.

Post by Valerie Fenwick, syndicated from Security, Beer, Theater and Biking!

Thursday, November 5, 2015

ICMC15: CMVP Programmatic Status

Carolyn French, ITS Engineer, CSE; Michael Cooper, IT Specialist, NIST; Apostol Vassilev, Cybersecurity Expert, Computer Security Division, NIST

NIST will be increasing the fees for one year (cost recovery fees), so that they can hire contractors to work through the backlog.

In August 2015, NIST published a Federal Register Notice seeking public comment on the potential use of certain ISO/IEC standards in crypto module testing, conformance, and validation activities. They did not get much feedback, other than complaints that this is a pay-based set of standards.

Question on the contractor strategy: we are now getting unusual questions, and our labs have to spend time educating the contractors. Are you satisfied with this strategy? Michael agreed it was a fair point, but said that they have smart people.

Note on why they did not receive many comments: a questioner said he's on the NIST public list, but the call for comments did not go there - it was only published in the Federal Register. NIST noted it was posted to a few places, and that they asked their labs to reach out.

Question: while it looks like we're moving towards ISO standard, what would happen if we don't get it approved by Secretary of Commerce? Answer: you'd have to wait a loooong time for us to generate something else.

When the new standard is signed, folks will have 12 months to submit under the old standard.

CMVP queue is down from 12 months to 3 months, they are trying to get this down further. Part of the current delay waiting for the cost recovery funds to come in. This can be slow due to large companies' PO process.  Labs can speed this up by requesting the funding from their clients sooner - or possibly NIST needs to bill sooner?  For example, NIST should not wait until the test report is received, but rather send the bill when they are notified that there is a system under test - or some other time before the test report is submitted.

Post by Valerie Fenwick,  syndicated from Security, Beer, Theater and Biking!

ICMC15: SP800-90B: Analysis of Linux /dev/random

Stephan Mueller, Principal Consultant and Evaluator, atsec information security

Not only does Mr. Mueller work at atsec, but he is also a Linux kernel developer in his spare time.

How do we define the noise sources for /dev/random?
  • Disk I/O
    • Block device number + 0x100 || Jiffies || High-Resolution Timer
    • Noise derived from access times to spinning platters
  • Human Interface Devices (HID)
  • Interrupts
    • fast_pool: 4 32-bit words filled with
      • Jiffies
      • High-resolution timer
      • IRQ number
      • 64-bit instruction pointer of the CPU

The output function is a SHA1 hash run over the pool contents and put in the output pool.

How do we condition? The conditioner mixes data into the input_pool - a non-approved mechanism according to SP800-90B: an LFSR over a 128 * 32-bit pool with a full-rank polynomial. The goal is bias reduction. For Disk/HID, bias is added due to the structure of the data mixed into the input_pool: the MSBs hold rather static data (event numbers), the middle bits hold low-entropy data (jiffies), and the LSBs hold high-entropy data (the high-resolution timer).
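A toy illustration of that layout - a static event number in the high bits, jiffies in the middle, and the high-resolution timer in the low bits. The field widths here are my own choice for illustration, not the kernel's exact layout:

```python
def pack_event(event_num: int, jiffies: int, hrt: int) -> int:
    """Sketch of a disk/HID event word before mixing into the input_pool:
    MSBs = static event number, middle bits = low-entropy jiffies,
    LSBs = high-entropy high-resolution timer. Widths are illustrative."""
    return ((event_num & 0xFF) << 24) | ((jiffies & 0xFFF) << 12) | (hrt & 0xFFF)
```

The point of the layout is that the high-entropy timer bits land in a predictable position, which is exactly the structural bias the LFSR conditioner is meant to smear across the pool.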

There is a health test, but it is not compliant with SP800-90B. For disk/HID, they measure the derivatives of the event time in jiffies. The minimum of all derivatives is the estimated entropy of the event, limited to a maximum of 11 bits per event. There is a claim that one bit of input data has less than one bit of entropy. So, this covers the continuous health test requirement and the repetition count requirement (even though, again, it's not the test SP800-90B actually asks for).
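The delta-based estimate can be sketched in a few lines of Python. This mirrors the kernel's approach in spirit (first, second, and third differences of event times, smallest magnitude, 11-bit cap), but it is not a line-for-line port of the kernel code:

```python
def credit_entropy(jiffies_times):
    """Estimate the entropy credited per timer event, kernel-style sketch:
    take the 1st, 2nd, and 3rd differences of the event times (in jiffies),
    use the smallest magnitude, and cap the credit at 11 bits per event."""
    credits = []
    last_t = last_d = last_d2 = 0
    for t in jiffies_times:
        d = t - last_t          # first difference
        d2 = d - last_d         # second difference
        d3 = d2 - last_d2       # third difference
        last_t, last_d, last_d2 = t, d, d2
        m = min(abs(d), abs(d2), abs(d3))
        # bit position of the highest set bit of m >> 1, capped at 11
        credits.append(min((m >> 1).bit_length(), 11))
    return credits
```

Perfectly regular events (identical differences) collapse to zero credit, while jittery events earn more - up to the 11-bit ceiling per event.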

All of these derivations are applied to all events, including at startup - so we kind of have a start-up test.

Interrupts: implicit - without interrupt handling, there is no live Linux kernel :-)

There is no adaptive proportion test - the entropy measurement is considered equivalent.

The goal of identifying failures is met.

How do you observe the noise sources and LFSR/input_pool in a live, unchanged Linux kernel without impacting the entropy calculation? Answer: SystemTap! Though Heisenberg's uncertainty still applies, as it changes timing by adding additional code paths. This is akin to executing Linux on a slower system, and therefore has no impact on the entropy estimates. [Note: I'm not sure I agree with that theory.]

Their test results:
General: Chi-Square for IID cannot be calculated, and the Chi-Square Goodness of Fit cannot be calculated. For Disk/HID, only the high-resolution timer was tested, as it is the main entropy source - the other items were zero or close to zero entropy. But when using the lower 11 bits of the timer, the data is almost IID. Looking at interrupts, each 32-bit word is tested individually. About half of the IID tests fail -> non-IID.

General conditioner for IID: collision tests are N/A due to no collisions, and all sanity checks always pass.

Is the /dev/random on Linux good enough? Noise sources: using the min entropy values for events, and considering the maximum of 11 bits of entropy estimated per event, we pass SP800-90B. Additional testing beyond SP800-90B shows that more than 6% of events are estimated to have 0 bits of entropy, and about 20% of events are estimated to have 1 bit of entropy - a massive underestimation of entropy in the noise sources. For the conditioner: no change from the noise sources. The entropy estimator is enforced when obtaining random values, preventing random number output when there is too little noise. The passing of the noise sources implies passing of the entire random number generator. The other pools feeding /dev/random and /dev/urandom are DRNGs!

SP800-90B assumes a certain structure for an entropy source: a straight data path from noise source to output. That is difficult to apply to noise sources maintaining an entropy pool.

Post by Valerie Fenwick, syndicated from Security, Beer, Theater and Biking!