Analyzing The Army’s Accidental Test

by adam on April 3, 2013

According to Wired, “Army Practices Poor Data Hygiene on Its New Smartphones, Tablets.” And I think that’s awesome. No, really, not the ironic sort of awesome, but the awesome sort of awesome, because what the Army is doing is a large scale natural experiment in “does it matter?”


Over the next n months, the Pentagon’s IG can compare incidents in the Army to those in the Navy and the Air Force, and see who’s doing better and who’s doing worse. In theory, the branches of the military should all be otherwise roughly equivalent in security practice and culture (compared to, say, Twitter’s corporate culture, or that of Goldman Sachs).

With that data, they can assess if the compliance standards for smartphones make a difference, and what difference they make.

So I’d like to call on the Army to not remediate any of the findings for 30 or 60 days. I’d like to call on the Pentagon IG to analyze incidents in a comparative way, and let us know what he finds.
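The comparison I’m asking the IG to do is, at its simplest, a two-sample proportion test: did the branch without the controls see a meaningfully different incident rate? A minimal sketch, using the Python standard library and entirely hypothetical incident counts and device totals:

```python
import math

def two_proportion_z(incidents_a, n_a, incidents_b, n_b):
    """Z statistic for the difference between two incident rates,
    using the pooled-proportion standard error."""
    p_a = incidents_a / n_a
    p_b = incidents_b / n_b
    p_pool = (incidents_a + incidents_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical numbers: incidents per devices fielded, Army vs. Navy.
z = two_proportion_z(incidents_a=40, n_a=10000, incidents_b=25, n_b=10000)
print(round(z, 2))
```

With these made-up numbers the z statistic lands below the usual 1.96 threshold, i.e. even a 60% higher incident count could be noise at this sample size. That’s exactly why the analysis needs to be done with real data rather than asserted either way.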

Update: I wanted to read the report, which, as it turns out, has been taken offline. (See Consolidated Listing of Reports, which says “Report Number DODIG-2013-060, Improvements Needed With Tracking and Configuring Army Commercial Mobile Devices, issued March 26, 2013, has been temporarily removed from this website pending further review of management comments.”)

However, based on the Wired article, this is not a report about breaches or bad outcomes; it’s a story about controls and control objectives.

Spending time or money on those controls may or may not make sense. Without information about the outcomes experienced without those controls, the efficacy of the controls is a matter of opinion and conjecture.


Further, spending time or money on those controls is at odds with other things. For example, the Army might choose to spend 30 minutes training every soldier to password lock their device, or they could spend that 30 minutes on additional first aid training, or Pashtun language, or some other skill that they might, for whatever reason, want soldiers to have.

It’s well past time to stop focusing on controls for the sake of controls, and start testing our ideas. No organization can afford to implement every idea. The Army, the Pentagon IG and other agencies may have a perfect opportunity to test these controls. To not do so would be tragic.


[/update]

One comment

The problem here is that the Army doesn’t get to choose on many of these controls, especially the OPM or DOD mandates. While FISMA does give some authority to the Agency Heads (and that’s the Sec Def, not the Army Chief), it is not carte blanche. For example, OPM via NARA mandates that all CUI data (PII/BUI/FOUO/etc) be protected with FDE using a FIPS 140-2 approved solution on mobile devices. While the Agency Head can deviate from this, it requires notification and publication in the Federal Register, which hasn’t been done.

The real problem here is that FIPS 199, FIPS 200, and NIST SP 800-37 all establish a process to do exactly what you are suggesting, and the Army isn’t making use of it. I agree experiments, metrics, etc. are good, BUT not via shooting from the hip. Set your test up correctly (even if it’s in production), formally accept the risk, and dot all your i’s, cross your t’s.

PS: It’s a bunk evaluation anyway, as the other DOD CC/S/A’s aren’t doing it any better.

by Peter on April 5, 2013 at 11:55 pm.
