Do Games Teach Security?

by adam on December 8, 2016

There’s a new paper from Mark Thompson and Hassan Takabi of the University of North Texas. The title captures the question:
Effectiveness Of Using Card Games To Teach Threat Modeling For Secure Web Application Developments

Gamification of classroom assignments and online tools has grown significantly in recent years. There have been a number of card games designed for teaching various cybersecurity concepts. However, effectiveness of these card games is unknown for the most part and there is no study on evaluating their effectiveness. In this paper, we evaluate effectiveness of one such game, namely the OWASP Cornucopia card game which is designed to assist software development teams identify security requirements in Agile, conventional and formal development processes. We performed an experiment where sections of graduate students and undergraduate students in a security related course at our university were split into two groups, one of which played the Cornucopia card game, and one of which did not. Quizzes were administered both before and after the activity, and a survey was taken to measure student attitudes toward the exercise. The results show that while students found the activity useful and would like to see this activity and more similar exercises integrated into the classroom, the game was not easy to understand. We need to spend enough time to familiarize the students with the game and prepare them for the exercises using the game to get the best results.

I’m very glad to see games like Cornucopia evaluated. If we’re going to push the use of Cornucopia (or Elevation of Privilege) for teaching, then we ought to be thinking about how well they work in comparison to other techniques. We have anecdotes, but to improve, we must test and measure.

Incentives, Insurance and Root Cause

by adam on December 2, 2016

Over the decade or so since The New School book came out, there’s been a sea change in how we talk about breaches, and how we talk about those who got breached. We agree that understanding what’s going wrong should be a bigger part of how we learn. I’m pleased to have played some part in that movement.

As I consider where we are today, a question that we can’t answer sufficiently is “what’s in it for me?” “Why should I spend time on this?” The benefits may take too long to appear. And so we should ask what we could do about that. In that context, I am very excited to see a proposal from Rob Knake on “Creating a Federally Sponsored Cyber Insurance Program.”

He suggests that a full root cause analysis would be a condition of a Federal insurance backstop:

The federally backstopped cyber insurance program should mandate that companies allow full breach investigations, which include on-site gathering of data on why the attack succeeded, to help other companies prevent similar attacks. This function would be similar to that performed by the National Transportation Safety Board (NTSB) for aviation incidents. When an incident occurs, the NTSB establishes the facts of the incident and makes recommendations to prevent similar incidents from occurring. Although regulators typically establish new requirements upon the basis of NTSB recommendations, most air carriers implement recommendations on a voluntary basis. Such a virtuous cycle could happen in cybersecurity if companies covered by a federal cyber insurance program had their incidents investigated by a new NTSB-like entity, which could be run by the private sector and funded by insurance companies.

Threat Modeling the PASTA Way

by adam on November 30, 2016

There’s a really interesting podcast with Robert Hurlbut, Chris Romeo, and Tony UcedaVelez on the PASTA approach to threat modeling. The whole podcast is worth a listen, especially hearing Chris and Tony discuss how an organization went from STRIDE to CAPEC and back again.

There’s a section where they discuss the idea of “think like an attacker,” and Chris brings up some of what I’ve written (“‘Think Like an Attacker’ is an opt-in mistake.”) I think that both Chris and Tony make excellent points, and I want to add some nuance around the frame. I don’t think the opposite of “think like an attacker” is “use a checklist”; I think it’s “reason by analogy to find threats,” or “use a structured approach to finding threats.” Reasoning by analogy is, admittedly, hard for a variety of reasons, which I’ll leave aside for now. But reasoning by analogy requires that you have a group of abstracted threats, and that you consider “how does this threat apply to my system?” You can use a structured approach such as STRIDE or CAPEC or an attack tree, or even an unstructured, unbounded set of threats (we call this brainstorming). That differs from good checklists in that the items in a good checklist have clear yes or no answers. For more on my perspective on checklists, take a look at my review of Gawande’s Checklist Manifesto.
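To make that contrast concrete, here’s a minimal sketch in Python. The system elements, prompts, and checklist items are all made up for illustration; they don’t come from Cornucopia, Elevation of Privilege, or any particular methodology document.

```python
# Structured threat-finding: walk each element of the system and ask how each
# abstracted threat category (here, STRIDE) might apply. The output is a set of
# open-ended questions to reason about, not yes/no answers.
STRIDE = [
    "Spoofing", "Tampering", "Repudiation",
    "Information disclosure", "Denial of service", "Elevation of privilege",
]

# Hypothetical decomposition of a toy web app; a real model would come from a
# data-flow diagram or similar.
elements = ["browser -> web server", "web server -> database"]

def enumerate_threats(elements, categories):
    """Pair every element with every threat category as a prompt for analysis."""
    for element in elements:
        for category in categories:
            yield f"How could {category} affect '{element}'?"

# A checklist, by contrast, is a list of items with clear yes or no answers.
checklist = [
    "Is TLS required on every external connection?",
    "Are database queries parameterized?",
]

if __name__ == "__main__":
    for prompt in enumerate_threats(elements, STRIDE):
        print(prompt)          # open-ended: requires judgment and analogy
    for item in checklist:
        print(item, "[y/n]")   # closed: auditable, but finds only what it names
```

The point isn’t the code; it’s that the structured walk produces questions you have to reason about, while the checklist produces answers you can audit.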

Tony’s book is “Risk Centric Threat Modeling: Process for Attack Simulation and Threat Analysis.”

Learning from Our Experience, Part Z

by adam on November 7, 2016

One of the themes of The New School of Information Security is how other fields learn from their experiences, and how information security’s culture of hiding our incidents prevents us from learning.


Today I found yet another field where they are looking to learn from previous incidents and mistakes: zombies. From “The Zombie Survival Guide: Recorded Attacks:”

Organize before they rise!

Scripted by the world’s leading zombie authority, Max Brooks, Recorded Attacks reveals how other eras and cultures have dealt with–and survived–the ancient viral plague. By immersing ourselves in past horror we may yet prevail over the coming outbreak in our time.

Of course, we don’t need to imagine learning from our mistakes. Plenty of fields do it, and so don’t shamble around like zombies.

The Breach Response Market Is Broken (and what could be done)

by adam on October 12, 2016

Much of what Andrew and I wrote about in the New School has come to pass. Disclosing breaches is no longer as scary, nor as shocking, as it was. But one thing we expected to happen was the emergence of a robust market of services for breach victims. That’s not happened, and I’ve been thinking about why that is, and what we might do about it.

I submitted a short (1 1/2 page) comment for the FTC’s PrivacyCon, and the FTC has published that here.

[Update Oct 19: I wrote a blog post for IANS, “After the Breach: Making Your Response Count“]

[Update Nov 21: the folks at Abine decided to run a survey, and asked 500 people what they’d like to see in a breach notice letter. Their blog post.]

Secure Development or Backdoors: Pick One

by adam on October 4, 2016

In “Threat Modeling Crypto Back Doors,” I wrote:

In the same vein, the requests and implementations for such back-doors may be confidential or classified. If that’s the case, the features may not go through normal tracking for implementation, testing, or review, again reducing the odds that they are secure. Of course, because such a system is designed to bypass other security controls, any weaknesses are likely to have outsized impact.

It sounds like exactly what I predicted has occurred. As Joseph Menn reports in “Yahoo secretly scanned customer emails for U.S. intelligence:”

When Stamos found out that Mayer had authorized the program, he resigned as chief information security officer and told his subordinates that he had been left out of a decision that hurt users’ security, the sources said. Due to a programming flaw, he told them hackers could have accessed the stored emails.

(I should add that I did not see anything like this at Microsoft, but had thought about how it might have unfolded as I wrote what I wrote in the book excerpt above.)


Crypto back doors are a bad idea, and we cannot implement them without breaking the security of the internet.

You say noise, I say data

by adam on September 20, 2016

There is a frequent claim that stock markets are somehow irrational and unable to properly value the impact of cyber incidents in pricing. (That’s not usually precisely how people phrase it.) I like this chart of one of the largest credit card breaches in history:

[Image: Target stock price chart]

It provides useful context as we consider this quote:

On the other hand, frequent disclosure of insignificant cyberincidents could overwhelm investors and harm a company’s stock price, said Eric Cernak, cyberpractice leader at the U.S. division of German insurer Munich Re. “If every time there’s unauthorized access, you’re filing that with the SEC, there’s going to be a lot of noise,” he said.
(Corporate Judgment Call: When to Disclose You’ve Been Hacked, Tatyana Shumsky, WSJ)

Now, perhaps Mr. Cernak’s words have been taken out of context. After all, it’s a single sentence in a long article, and the lead-in, which is a paraphrase, may confuse the issue.

I am surprised that an insurer would be opposed to having more data from which they can try to tease out causative factors.

Image from The Langner group. I do wish it showed the S&P 500.

Why Don’t We Have an Incident Repository?

by adam on September 14, 2016

Steve Bellovin and I provided some “Input to the Commission on Enhancing National Cybersecurity.” It opens:

We are writing after 25 years of calls for a “NTSB for Security” have failed to result in action. As early as 1991, a National Research Council report called for “build[ing] a repository of incident data” and said “one possible model for data collection is the incident reporting system administered by the National Transportation Safety Board.” [1] The calls for more data about incidents have continued, including by us [2, 3].

The lack of a repository of incident data impacts our ability to answer or assess many of your questions, and our key recommendation is that the failure to establish such a repository is, in and of itself, worthy of study. There are many factors in the realm of folklore as to why we do not have a repository, but no rigorous answer. Thus, our answer to your question 4 (“What can or should be done now or within the next 1-2 years to better address the challenges?”) is to study what factors have inhibited the creation of a repository of incident data, and our answer to question 5 (“what should be done over a decade?”) is to establish one. Commercial air travel is so incredibly safe today precisely because of decades of accident investigations, investigations that have helped plane manufacturers, airlines, and pilots learn from previous failures.

What Boards Want in Security Reporting

by adam on August 22, 2016

[Image: a sub-optimal security dashboard]

Recently, some of my friends were talking about a report by Bay Dynamics, “How Boards of Directors Really Feel About Cyber Security Reports.” In that report, we see things like:

  • More than three in five board members say they are both significantly or very “satisfied” (64%) and “inspired” (65%) after the typical presentation by IT and security executives about the company’s cyber risk, yet the majority (85%) of board members believe that IT and security executives need to improve the way they report to the board.
  • Only one-third of IT and security executives believe the board comprehends the cyber security information provided to them, versus 70% of board members surveyed who report that they understand everything they’re being told by IT and security executives in their presentations.

Some of this may be poor survey design or reporting: it’s hard to survey someone to see if they don’t understand, and the questions aren’t listed in the survey.

But that may be taking the easy way out. Perhaps what we’re being told is consistent. Security leaders don’t think the boards are getting the nuance, while the boards are getting the big picture just fine. Perhaps boards really do want better reporting, and, having nothing useful to suggest, consider themselves “satisfied.”

They ask for numbers, but not because they really want numbers. I’ve come to believe that the reason they ask for numbers is that they lack a feel for the risks of cyber. They understand risks in things like product launches or moving manufacturing to China, or making the wrong hire for VP of social media. They are hopeful that in asking for numbers, they’ll learn useful things about the state of what they’re governing.

So what do boards want in security reporting? They want concrete, understandable and actionable reports. They want to know if they have the right hands on the rudder, and if those hands are reasonably resourced. (Boards also know that no one who reports to them is every really satisfied with their budget.)

(Lastly, the graphic? Overly complex, not actionable, lacks explicit recommendations or requests. It’s what boards don’t want.)

FBI says their warnings were ignored

by adam on August 17, 2016

There are two major parts to the DNC/FBI/Russia story. The first part is the really fascinating evolution of public disclosures over the DNC hack. We know the DNC was hacked, and that someone gave a set of emails to Wikileaks. There are accusations that it was Russia, and then someone leaked an NSA toolkit and threatened to leak more. (See Nick Weaver’s “NSA and the No Good, Very Bad Monday,” and Ellen Nakashima’s “Powerful NSA hacking tools have been revealed online,” where several NSA folks confirm that the tool dump is real. See also Snowden’s comments on Twitter: “What’s new? NSA malware staging servers getting hacked by a rival is not new. A rival publicly demonstrating they have done so is.”) That’s not the part I want to talk about.

The second part is what the FBI knew, how they knew it, who they told, and how. In particular, I want to look at the claims in “FBI took months to warn Democrats[…]” at Reuters:

In its initial contact with the DNC last fall, the FBI instructed DNC personnel to look for signs of unusual activity on the group’s computer network, one person familiar with the matter said. DNC staff examined their logs and files without finding anything suspicious, that person said.

When DNC staffers requested further information from the FBI to help them track the incursion, they said the agency declined to provide it.
[…]
“There is a fine line between warning people or companies or even other government agencies that they’re being hacked – especially if the intrusions are ongoing – and protecting intelligence operations that concern national security,” said the official, who spoke on condition of anonymity.

Let me repeat that: the FBI had evidence that the DNC was being hacked by the Russians, and they said “look around for ‘unusual activity.'”

Shockingly, their warning did not enable the DNC to find anything.

When Rob Reeder, Ellen Cram Kowalczyk and I did work on usability of warnings, we recommended they be explanatory, actionable and tested. This warning fails on all those counts.

There may be a line, or really a balancing act, around disclosing what the FBI knows while ensuring that how they know it is protected. (I’m going to treat the FBI as the assigned mouthpiece, and move to discussing the US government as a whole, because otherwise we may rat hole on authorities, US vs non-US activity, etc., which are a distraction.) Fundamentally, we can create a simple model of how the US government learns about these hacks:

  • Network monitoring
  • Kill chain-driven forensics
  • Agents working at the attacker
  • “Fifth party take” where they’ve broken into a spy server and are reading what those spies take.*

*This “fifth party take”, to use the NSA’s jargon, is what makes the NSA server takeover so interesting and relevant. Is the release of the NSA files a comment that the GRU knows that the NSA knows about their hack because the GRU has owned additional operational servers?

Now, we can ask: if the FBI says “look for connections to Twitter when there’s no one logged into Alice’s computer,” does it allow the attacker to distinguish between those methods?

No.

Now, it does disclose that that C&C pathway is known, and if the attacker has multiple paths, then it might be interesting to know that only one was detected. But there’s another tradeoff, which is that as long as the penetration is active, the US government can continue to find indicators, and use them to find other break-ins. That’s undeniably useful to the FBI, at the cost of the legitimacy of our electoral processes. That’s a bad tradeoff.

We have to think about and discuss priorities and tradeoffs. We need to talk about the policy which the FBI is implementing, which seems to be to provide un-actionable, useless warnings. Perhaps that’s sufficient in some eyes.

We are not having a policy discussion about these tradeoffs, and that’s a shame.

Here are some questions that we can think about:

  • Is the model presented above of how attacks are detected reasonable?
  • Is there anything classified which changes the general debate? (No, we learned that from the CRISIS report.)
  • What should a government warning include? A single IOC? Some fraction in a range (say 25-35%)? All known IOCs? (Using a range is interesting because it reduces information leakage back to an attacker who’s compromised a source; a sketch of how that could work follows this list.)
  • How do we get IOCs to be bulk declassified so they can be used at organizations whose IT staff do not have clearances, cannot get clearances rapidly, and post-OPM ain’t likely to?
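
To make the range idea concrete, here is a minimal sketch in Python of disclosing only a randomly chosen fraction of known IOCs. The indicators and the 25-35% band are purely illustrative assumptions, not any agency’s actual practice:

```python
import random

def select_iocs_to_share(iocs, low=0.25, high=0.35, rng=random):
    """Disclose only a randomly chosen fraction (within [low, high]) of known IOCs.

    Because the fraction itself is secret and variable, an attacker who sees the
    disclosed set can't work backwards to how many indicators are known in total.
    """
    fraction = rng.uniform(low, high)
    count = max(1, round(fraction * len(iocs)))
    return rng.sample(sorted(iocs), count)

# Hypothetical indicators, purely for illustration.
known_iocs = {"203.0.113.7", "198.51.100.22", "evil-staging.example.net",
              "d41d8cd98f00b204e9800998ecf8427e"}
print(select_iocs_to_share(known_iocs))
```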

That’s a start. What other questions should we be asking so we can move from “Congressional leaders were briefed a year ago on hacking of Democrats” to “hackers were rebuffed from interfering in our elections” or “hackers don’t even bother trying to attack elections?”

[Update: In “AS FBI WARNS ELECTION SITES GOT HACKED, ALL EYES ARE ON RUSSIA“, Wired links to an FBI Flash, which has an explicit set of indicators, including IPs and httpd log entries, along with explicit recommendations such as “Search logs for commands often passed during SQL injection.” This is far more detail than was in these documents a few years ago, and far more detail than I expected when I wrote the above.]
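
As a rough illustration of what “Search logs for commands often passed during SQL injection” can mean in practice, here is a minimal sketch in Python. The log path and patterns are placeholders I chose for the example, not the indicators published in the FBI Flash:

```python
#!/usr/bin/env python3
"""Toy scan of an httpd access log for strings commonly seen in SQL injection
attempts. The patterns and default log path are illustrative placeholders."""
import re
import sys
from urllib.parse import unquote

# Common SQL injection fragments; real hunting would use the published IOCs.
PATTERNS = [
    r"union(\s|\+)+select",
    r"or(\s|\+)+1=1",
    r"information_schema",
    r"sleep\(\d+\)",
    r"xp_cmdshell",
]
SQLI = re.compile("|".join(PATTERNS), re.IGNORECASE)

def scan(path):
    """Print lines whose (URL-decoded) request looks like a SQLi probe."""
    with open(path, errors="replace") as log:
        for lineno, line in enumerate(log, 1):
            if SQLI.search(unquote(line)):
                print(f"{path}:{lineno}: {line.rstrip()}")

if __name__ == "__main__":
    for logfile in sys.argv[1:] or ["/var/log/httpd/access_log"]:
        scan(logfile)
```

A warning that ships with something like this, plus concrete IPs to grep for, is exactly the explanatory, actionable detail the earlier DNC warning lacked.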