Seeing the Big Picture

by adam on December 12, 2016

This quote from Bob Iger, head of Disney, is interesting for the perspective it offers on leading a very large company:

There is a human side to it that I try to apply and consider. [But] the harder thing is to balance with the reality that not everything is perfect. In the normal course of running a company this big, you’re going to see, every day, things that are not as great as you would have hoped or wanted them to be. You have to figure out how to absorb that without losing your sense of optimism, which is part of leadership — without losing faith, without wanting to go under the covers and not come out, without being down or angry to a counterproductive level, and without demanding something of people that is unfair, inhuman, impossible. (“Bob Iger on Shanghai Disney, Parting With His Chosen Successor, and His Pursuit of Perfection”, Variety)

Note that he’s not saying ignore the problems; he’s not saying don’t get angry; he’s not saying don’t demand improvement. He’s saying don’t get so angry that it’s counterproductive. He’s saying be demanding, but be demanding in a fair way. He’s also saying that you can remain optimistic in the face of problems.

There are lessons here for security professionals.

Do Games Teach Security?

by adam on December 8, 2016

There’s a new paper from Mark Thompson and Hassan Takabi of the University of North Texas. The title captures the question:
Effectiveness Of Using Card Games To Teach Threat Modeling For Secure Web Application Developments

Gamification of classroom assignments and online tools has grown significantly in recent years. There have been a number of card games designed for teaching various cybersecurity concepts. However, effectiveness of these card games is unknown for the most part and there is no study on evaluating their effectiveness. In this paper, we evaluate effectiveness of one such game, namely the OWASP Cornucopia card game which is designed to assist software development teams identify security requirements in Agile, conventional and formal development processes. We performed an experiment where sections of graduate students and undergraduate students in a security related course at our university were split into two groups, one of which played the Cornucopia card game, and one of which did not. Quizzes were administered both before and after the activity, and a survey was taken to measure student attitudes toward the exercise. The results show that while students found the activity useful and would like to see this activity and more similar exercises integrated into the classroom, the game was not easy to understand. We need to spend enough time to familiarize the students with the game and prepare them for the exercises using the game to get the best results.

I’m very glad to see games like Cornucopia evaluated. If we’re going to push the use of Cornucopia (or Elevation of Privilege) for teaching, then we ought to be thinking about how well they work in comparison to other techniques. We have anecdotes, but to improve, we must test and measure.
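As a purely illustrative aside on “test and measure”: if you run a game group and a control group with quizzes before and after, as the paper describes, the core analysis can be as simple as comparing score gains between the groups. Here’s a minimal sketch in Python; the numbers are invented, and the paper’s actual data and statistical method are not reproduced here.

```python
# A minimal sketch of measuring a teaching game with pre/post quizzes.
# All scores are invented for illustration.
from scipy import stats

# Per-student score gain (post-quiz minus pre-quiz) for each group.
game_gains = [3, 1, 2, 0, 2, 1, 3, 2]     # played Cornucopia
control_gains = [1, 0, 2, 1, 0, 1, 0, 1]  # did not play

# Welch's t-test: did the game group improve more than the control?
t_stat, p_value = stats.ttest_ind(game_gains, control_gains, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

A small p-value would suggest the game group’s improvement is unlikely to be chance, though with classroom-sized samples, effect sizes and replication matter as much as any single test.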

Incentives, Insurance and Root Cause

by adam on December 2, 2016

Over the decade or so since The New School book came out, there’s been a sea change in how we talk about breaches, and how we talk about those who got breached. We agree that understanding what’s going wrong should be a bigger part of how we learn. I’m pleased to have played some part in that movement.

As I consider where we are today, the questions we can’t answer sufficiently are “What’s in it for me?” and “Why should I spend time on this?” The benefits may take too long to appear. And so we should ask what we could do about that. In that context, I am very excited to see a proposal from Rob Knake on “Creating a Federally Sponsored Cyber Insurance Program.”

He suggests that a full root cause analysis would be a condition of a federal insurance backstop:

The federally backstopped cyber insurance program should mandate that companies allow full breach investigations, which include on-site gathering of data on why the attack succeeded, to help other companies prevent similar attacks. This function would be similar to that performed by the National Transportation Safety Board (NTSB) for aviation incidents. When an incident occurs, the NTSB establishes the facts of the incident and makes recommendations to prevent similar incidents from occurring. Although regulators typically establish new requirements upon the basis of NTSB recommendations, most air carriers implement recommendations on a voluntary basis. Such a virtuous cycle could happen in cybersecurity if companies covered by a federal cyber insurance program had their incidents investigated by a new NTSB-like entity, which could be run by the private sector and funded by insurance companies.

Threat Modeling the PASTA Way

by adam on November 30, 2016

There’s a really interesting podcast with Robert Hurlbut, Chris Romeo, and Tony UcedaVelez on the PASTA approach to threat modeling. The whole conversation is worth hearing, especially Chris and Tony discussing how an organization went from STRIDE to CAPEC and back again.

There’s a section where they discuss the idea of “think like an attacker,” and Chris brings up some of what I’ve written (“‘Think Like an Attacker’ is an opt-in mistake.”) I think that both Chris and Tony make excellent points, and I want to add some nuance around the frame. I don’t think the opposite of “think like an attacker” is “use a checklist”; I think it’s “reason by analogy to find threats,” or “use a structured approach to finding threats.” Reasoning by analogy is, admittedly, hard for a variety of reasons, which I’ll leave aside for now. But reasoning by analogy requires that you have a group of abstracted threats, and that you consider “how does this threat apply to my system?” You can use a structured approach such as STRIDE or CAPEC or an attack tree, or even an unstructured, unbounded set of threats (we call this brainstorming). That differs from a good checklist, in which each item has a clear yes-or-no answer. For more of my perspective on checklists, take a look at my review of Gawande’s Checklist Manifesto.
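To make that distinction concrete, here’s a minimal sketch of reasoning by analogy with a structured threat set. The system elements below are hypothetical, and this illustrates the idea rather than any official tooling: each abstract STRIDE threat becomes an open-ended prompt against each element of a system model.

```python
# A minimal sketch of reasoning by analogy with a structured threat set.
# The element names below are hypothetical.
STRIDE = [
    "spoofing",
    "tampering",
    "repudiation",
    "information disclosure",
    "denial of service",
    "elevation of privilege",
]

# A toy system model: two processes, a store, and a data flow.
elements = ["browser", "web server", "orders database", "browser-to-server flow"]

def threat_prompts(elements, threats):
    """Yield one open-ended prompt per (element, threat) pair."""
    for element in elements:
        for threat in threats:
            yield f"How might {threat} apply to the {element}?"

for prompt in threat_prompts(elements, STRIDE):
    print(prompt)
```

Note that each prompt is a question to reason about, not a box to tick; that open-endedness is what separates this approach from a checklist of yes-or-no items.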

Tony’s book is “Risk Centric Threat Modeling: Process for Attack Simulation and Threat Analysis.”

Learning from Our Experience, Part Z

by adam on November 7, 2016

One of the themes of The New School of Information Security is how other fields learn from their experiences, and how information security’s culture of hiding our incidents prevents us from learning.

[Cover image: The Zombie Survival Guide: Recorded Attacks]

Today I found yet another field where they are looking to learn from previous incidents and mistakes: zombies. From “The Zombie Survival Guide: Recorded Attacks”:

Organize before they rise!

Scripted by the world’s leading zombie authority, Max Brooks, Recorded Attacks reveals how other eras and cultures have dealt with – and survived – the ancient viral plague. By immersing ourselves in past horror we may yet prevail over the coming outbreak in our time.

Of course, we don’t need to imagine learning from our mistakes. Plenty of fields do it, and so they don’t shamble around like zombies.

The Breach Response Market Is Broken (and what could be done)

by adam on October 12, 2016

Much of what Andrew and I wrote about in the New School has come to pass. Disclosing breaches is no longer as scary, nor as shocking, as it was. But one thing we expected to happen was the emergence of a robust market of services for breach victims. That’s not happened, and I’ve been thinking about why that is, and what we might do about it.

I submitted a short (1½ page) comment for the FTC’s PrivacyCon, and the FTC has published it here.

[Update Oct 19: I wrote a blog post for IANS, “After the Breach: Making Your Response Count”]

[Update Nov 21: the folks at Abine decided to run a survey, and asked 500 people what they’d like to see in a breach notice letter. Their blog post.]

Secure Development or Backdoors: Pick One

by adam on October 4, 2016

In “Threat Modeling Crypto Back Doors,” I wrote:

In the same vein, the requests and implementations for such back-doors may be confidential or classified. If that’s the case, the features may not go through normal tracking for implementation, testing, or review, again reducing the odds that they are secure. Of course, because such a system is designed to bypass other security controls, any weaknesses are likely to have outsized impact.

It sounds like exactly what I predicted has occurred. As Joseph Menn reports in “Yahoo secretly scanned customer emails for U.S. intelligence:”

When Stamos found out that Mayer had authorized the program, he resigned as chief information security officer and told his subordinates that he had been left out of a decision that hurt users’ security, the sources said. Due to a programming flaw, he told them hackers could have accessed the stored emails.

(I should add that I did not see anything like this at Microsoft, but had thought about how it might have unfolded as I wrote what I wrote in the book excerpt above.)


Crypto back doors are a bad idea, and we cannot implement them without breaking the security of the internet.

You say noise, I say data

by adam on September 20, 2016

There is a frequent claim that stock markets are somehow irrational, and unable to properly price the impact of cyber incidents. (That’s not usually precisely how people phrase it.) I like this chart of one of the largest credit card breaches in history:

[Chart: Target stock price around the disclosure of its breach]

It provides useful context as we consider this quote:

On the other hand, frequent disclosure of insignificant cyberincidents could overwhelm investors and harm a company’s stock price, said Eric Cernak, cyberpractice leader at the U.S. division of German insurer Munich Re. “If every time there’s unauthorized access, you’re filing that with the SEC, there’s going to be a lot of noise,” he said.
(“Corporate Judgment Call: When to Disclose You’ve Been Hacked,” Tatyana Shumsky, WSJ)

Now, perhaps Mr. Cernak’s words have been taken out of context. After all, it’s a single sentence in a long article, and the lead-in, which is a paraphrase, may confuse the issue.

I am surprised that an insurer would be opposed to having more data from which they can try to tease out causative factors.

Image from The Langner Group. I do wish it showed the S&P 500.

Why Don’t We Have an Incident Repository?

by adam on September 14, 2016

Steve Bellovin and I provided some “Input to the Commission on Enhancing National Cybersecurity.” It opens:

We are writing after 25 years of calls for a “NTSB for Security” have failed to result in action. As early as 1991, a National Research Council report called for “build[ing] a repository of incident data” and said “one possible model for data collection is the incident reporting system administered by the National Transportation Safety Board.” [1] The calls for more data about incidents have continued, including by us [2, 3].

The lack of a repository of incident data impacts our ability to answer or assess many of your questions, and our key recommendation is that the failure to establish such a repository is, in and of itself, worthy of study. There are many factors in the realm of folklore as to why we do not have a repository, but no rigorous answer. Thus, our answer to your question 4 (“What can or should be done now or within the next 1-2 years to better address the challenges?”) is to study what factors have inhibited the creation of a repository of incident data, and our answer to question 5 (“what should be done over a decade?”) is to establish one. Commercial air travel is so incredibly safe today precisely because of decades of accident investigations, investigations that have helped plane manufacturers, airlines, and pilots learn from previous failures.
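As a purely hypothetical illustration, here is what a single record in such a repository might capture, loosely modeled on what NTSB reports contain. The field names and the example entry are invented, not a proposed standard.

```python
# A hypothetical record format for a security-incident repository,
# loosely modeled on NTSB accident reports. Field names are
# illustrative assumptions, not a proposed standard.
from dataclasses import dataclass, field

@dataclass
class IncidentRecord:
    incident_id: str
    discovered: str                 # ISO 8601 date the incident was found
    sector: str                     # e.g. "retail", "healthcare"
    initial_access: str             # how the attacker got in
    contributing_factors: list[str] = field(default_factory=list)
    impact: str = ""                # what was lost or disrupted
    recommendations: list[str] = field(default_factory=list)

# An invented example entry, not drawn from any real investigation.
example = IncidentRecord(
    incident_id="2016-0001",
    discovered="2016-09-01",
    sector="retail",
    initial_access="phished vendor credentials",
    contributing_factors=["flat internal network", "no MFA on remote access"],
    impact="payment card data exfiltrated",
    recommendations=["segment the POS network", "require MFA for vendor access"],
)
print(example.recommendations)
```

The particular fields matter less than capturing contributing factors and recommendations in a comparable, queryable form, so that others can learn from each investigation.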

What Boards Want in Security Reporting

by adam on August 22, 2016

[Image: an overly complex security dashboard]

Recently, some of my friends were talking about a report by Bay Dynamics, “How Boards of Directors Really Feel About Cyber Security Reports.” In that report, we see things like:

More than three in five board members say they are both significantly or very “satisfied” (64%) and “inspired” (65%) after the typical presentation by IT and security executives about the company’s cyber risk, yet the majority (85%) of board members believe that IT and security executives need to improve the way they report to the board.

Only one-third of IT and security executives believe the board comprehends the cyber security information provided to them, versus 70% of board members surveyed who report that they understand everything they’re being told by IT and security executives in their presentations.

Some of this may be poor survey design or reporting: it’s hard to survey someone to find out what they don’t understand, and the questions aren’t listed in the survey.

But that may be taking the easy way out. Perhaps what we’re being told is consistent. Security leaders don’t think the boards are getting the nuance, while the boards are getting the big picture just fine. Perhaps boards really do want better reporting, and, having nothing useful to suggest, consider themselves “satisfied.”

They ask for numbers, but not because they really want numbers. I’ve come to believe that the reason they ask for numbers is that they lack a feel for the risks of cyber. They understand risks in things like product launches or moving manufacturing to China, or making the wrong hire for VP of social media. They are hopeful that in asking for numbers, they’ll learn useful things about the state of what they’re governing.

So what do boards want in security reporting? They want concrete, understandable and actionable reports. They want to know if they have the right hands on the rudder, and if those hands are reasonably resourced. (Boards also know that no one who reports to them is ever really satisfied with their budget.)

(Lastly, the graphic? Overly complex, not actionable, lacks explicit recommendations or requests. It’s what boards don’t want.)