The Breach Response Market Is Broken (and what could be done)

by adam on October 12, 2016

Much of what Andrew and I wrote about in the New School has come to pass. Disclosing breaches is no longer as scary, nor as shocking, as it was. But one thing we expected to happen was the emergence of a robust market of services for breach victims. That’s not happened, and I’ve been thinking about why that is, and what we might do about it.

I submitted a short (1 1/2 page) comment for the FTC’s PrivacyCon, and the FTC has published that here.

[Update Oct 19: I wrote a blog post for IANS, “After the Breach: Making Your Response Count“]

Secure Development or Backdoors: Pick One

by adam on October 4, 2016

In “Threat Modeling Crypto Back Doors,” I wrote:

In the same vein, the requests and implementations for such back-doors may be confidential or classified. If that’s the case, the features may not go through normal tracking for implementation, testing, or review, again reducing the odds that they are secure. Of course, because such a system is designed to bypass other security controls, any weaknesses are likely to have outsized impact.

It sounds like exactly what I predicted has occurred. As Joseph Menn reports in “Yahoo secretly scanned customer emails for U.S. intelligence:”

When Stamos found out that Mayer had authorized the program, he resigned as chief information security officer and told his subordinates that he had been left out of a decision that hurt users’ security, the sources said. Due to a programming flaw, he told them hackers could have accessed the stored emails.

(I should add that I did not see anything like this at Microsoft, but had thought about how it might have unfolded as I wrote what I wrote in the book excerpt above.)

Crypto back doors are a bad idea, and we cannot implement them without breaking the security of the internet.

You say noise, I say data

by adam on September 20, 2016

There is a frequent claim that stock markets are somehow irrational and unable to properly value the impact of cyber incidents in pricing. (That's not usually precisely how people phrase it.) I like this chart of one of the largest credit card breaches in history:

[Chart: Target's stock price around the breach disclosure]

It provides useful context as we consider this quote:

On the other hand, frequent disclosure of insignificant cyberincidents could overwhelm investors and harm a company’s stock price, said Eric Cernak, cyberpractice leader at the U.S. division of German insurer Munich Re. “If every time there’s unauthorized access, you’re filing that with the SEC, there’s going to be a lot of noise,” he said.
(Corporate Judgment Call: When to Disclose You’ve Been Hacked, Tatyana Shumsky, WSJ)

Now, perhaps Mr. Cernak's words have been taken out of context. After all, it's a single sentence in a long article, and the lead-in, which is a paraphrase, may confuse the issue.

I am surprised that an insurer would be opposed to having more data from which they can try to tease out causative factors.

Image from The Langner group. I do wish it showed the S&P 500.
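For anyone who wants to build that comparison themselves, it's a short exercise. Here is a minimal event-study-style sketch; the file name, column names, and exact disclosure date are my assumptions for illustration, not a rigorous methodology:

```python
# Toy event-study sketch: compare a breached company's stock to the S&P 500
# around the disclosure date. File name, column names, and the exact date
# are illustrative assumptions.
import pandas as pd

prices = pd.read_csv("prices.csv", parse_dates=["date"], index_col="date").sort_index()
returns = prices[["TGT", "SP500"]].pct_change().dropna()

disclosure = pd.Timestamp("2013-12-19")  # approximate public disclosure
window = returns.loc[disclosure - pd.Timedelta(days=30):
                     disclosure + pd.Timedelta(days=90)]

# Crude market adjustment: stock return minus index return, cumulated.
abnormal = (window["TGT"] - window["SP500"]).cumsum()
print(abnormal.tail())
```

Even this crude market adjustment turns the "noise" claim into an empirical question: did the disclosure move the stock relative to the index, or not?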

Why Don’t We Have an Incident Repository?

by adam on September 14, 2016

Steve Bellovin and I provided some “Input to the Commission on Enhancing National Cybersecurity.” It opens:

We are writing after 25 years of calls for a “NTSB for Security” have failed to result in action. As early as 1991, a National Research Council report called for “build[ing] a repository of incident data” and said “one possible model for data collection is the incident reporting system administered by the National Transportation Safety Board.” [1] The calls for more data about incidents have continued, including by us [2, 3].

The lack of a repository of incident data impacts our ability to answer or assess many of your questions, and our key recommendation is that the failure to establish such a repository is, in and of itself, worthy of study. There are many factors in the realm of folklore as to why we do not have a repository, but no rigorous answer. Thus, our answer to your question 4 (“What can or should be done now or within the next 1-2 years to better address the challenges?”) is to study what factors have inhibited the creation of a repository of incident data, and our answer to question 5 (“what should be done over a decade?”) is to establish one. Commercial air travel is so incredibly safe today precisely because of decades of accident investigations, investigations that have helped plane manufacturers, airlines, and pilots learn from previous failures.
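To make "a repository of incident data" concrete, here is a minimal sketch of what a single record might hold, loosely modeled on NTSB accident reports. The field names and sample values are my illustrative assumptions, not a proposed standard:

```python
# Hypothetical incident-repository record, loosely modeled on NTSB
# accident reports. Field names and values are illustrative only.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class IncidentReport:
    incident_id: str
    discovered: date
    disclosed: date
    organization_sector: str          # e.g., "retail", "healthcare"
    initial_access: str               # e.g., "phishing", "stolen credentials"
    dwell_time_days: int              # time from compromise to detection
    records_affected: int
    contributing_factors: list[str] = field(default_factory=list)
    corrective_actions: list[str] = field(default_factory=list)

report = IncidentReport(
    incident_id="2016-0001",
    discovered=date(2016, 1, 4),
    disclosed=date(2016, 2, 1),
    organization_sector="retail",
    initial_access="phishing",
    dwell_time_days=45,
    records_affected=100_000,
    contributing_factors=["unpatched VPN appliance"],
    corrective_actions=["MFA on remote access"],
)
```

The point of fields like dwell time and contributing factors is the same as in aviation: they let analysts look across many incidents for patterns, rather than treating each breach as a one-off story.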

What Boards Want in Security Reporting

by adam on August 22, 2016

[Image: a suboptimal security dashboard]

Recently, some of my friends were talking about a report by Bay Dynamics, “How Boards of Directors Really Feel About Cyber Security Reports.” In that report, we see things like:

  • More than three in five board members say they are both significantly or very "satisfied" (64%) and "inspired" (65%) after the typical presentation by IT and security executives about the company's cyber risk, yet the majority (85%) of board members believe that IT and security executives need to improve the way they report to the board.
  • Only one-third of IT and security executives believe the board comprehends the cyber security information provided to them, versus 70% of board members surveyed who report that they understand everything they're being told by IT and security executives in their presentations.

Some of this may be poor survey design or reporting: it's hard to survey someone to see if they don't understand, and the survey questions aren't listed in the report.

But that may be taking the easy way out. Perhaps what we’re being told is consistent. Security leaders don’t think the boards are getting the nuance, while the boards are getting the big picture just fine. Perhaps boards really do want better reporting, and, having nothing useful to suggest, consider themselves “satisfied.”

They ask for numbers, but not because they really want numbers. I’ve come to believe that the reason they ask for numbers is that they lack a feel for the risks of cyber. They understand risks in things like product launches or moving manufacturing to China, or making the wrong hire for VP of social media. They are hopeful that in asking for numbers, they’ll learn useful things about the state of what they’re governing.

So what do boards want in security reporting? They want concrete, understandable and actionable reports. They want to know if they have the right hands on the rudder, and if those hands are reasonably resourced. (Boards also know that no one who reports to them is ever really satisfied with their budget.)

(Lastly, the graphic? Overly complex, not actionable, lacks explicit recommendations or requests. It’s what boards don’t want.)

FBI says their warnings were ignored

by adam on August 17, 2016

There are two major parts to the DNC/FBI/Russia story. The first part is the really fascinating evolution of public disclosures over the DNC hack. We know the DNC was hacked, and that someone gave a set of emails to Wikileaks. There are accusations that it was Russia, and then someone leaked an NSA toolkit and threatened to leak more. (See Nick Weaver's "NSA and the No Good, Very Bad Monday," and Ellen Nakashima's "Powerful NSA hacking tools have been revealed online," where several NSA folks confirm that the tool dump is real. See also Snowden's comments on Twitter: "What's new? NSA malware staging servers getting hacked by a rival is not new. A rival publicly demonstrating they have done so is.") That's not the part I want to talk about.

The second part is what the FBI knew, how they knew it, who they told, and how. In particular, I want to look at the claims in “FBI took months to warn Democrats[…]” at Reuters:

In its initial contact with the DNC last fall, the FBI instructed DNC personnel to look for signs of unusual activity on the group’s computer network, one person familiar with the matter said. DNC staff examined their logs and files without finding anything suspicious, that person said.

When DNC staffers requested further information from the FBI to help them track the incursion, they said the agency declined to provide it.
“There is a fine line between warning people or companies or even other government agencies that they’re being hacked – especially if the intrusions are ongoing – and protecting intelligence operations that concern national security,” said the official, who spoke on condition of anonymity.

Let me repeat that: the FBI had evidence that the DNC was being hacked by the Russians, and they said “look around for ‘unusual activity.'”

Shockingly, their warning did not enable the DNC to find anything.

When Rob Reeder, Ellen Cram Kowalczyk and I did work on usability of warnings, we recommended they be explanatory, actionable and tested. This warning fails on all those counts.

There may be a line, or really, a balancing act, around disclosing what the FBI knows, and ensuring that how they know it is protected. (I’m going to treat the FBI as the assigned mouthpiece, and move to discussing the US government as a whole, because otherwise we may rat hole on authorities, US vs non-US activity, etc, which are a distraction). Fundamentally, we can create a simple model of how the US government learns about these hacks:

  • Network monitoring
  • Kill chain-driven forensics
  • Agents working at the attacker
  • “Fifth party take” where they’ve broken into a spy server and are reading what those spies take.*

*This "fifth party take," to use the NSA's jargon, is what makes the NSA server takeover so interesting and relevant. Is the release of the NSA files a comment that the GRU knows that the NSA knows about their hack, because the GRU has owned additional operational servers?

Now, we can ask: if the FBI says "look for connections to Twitter when there's no one logged into Alice's computer," does it allow the attacker to distinguish between those four methods?
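Concretely, even that hypothetical warning is mechanically checkable. A toy sketch, where the log files, formats, and field names are all my assumptions:

```python
# Toy check for the hypothetical warning: flag outbound connections to a
# suspect domain while no interactive user session is active.
# Log formats and field names are assumptions for illustration.
import csv
from datetime import datetime

def parse(ts):
    return datetime.fromisoformat(ts)

with open("sessions.csv") as f:  # columns: login, logout (ISO timestamps)
    sessions = [(parse(r["login"]), parse(r["logout"])) for r in csv.DictReader(f)]

with open("netflow.csv") as f:   # columns: time, dest_host
    for row in csv.DictReader(f):
        t = parse(row["time"])
        if row["dest_host"].endswith("twitter.com") and \
           not any(start <= t <= end for start, end in sessions):
            print("suspicious:", row)
```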


Now, it does disclose that that C&C pathway is known, and if the attacker has multiple paths, then it might be interesting to know that only one was detected. But there’s another tradeoff, which is that as long as the penetration is active, the US government can continue to find indicators, and use them to find other break-ins. That’s undeniably useful to the FBI, at the cost of the legitimacy of our electoral processes. That’s a bad tradeoff.

We have to think about and discuss priorities and tradeoffs. We need to talk about the policy the FBI is implementing, which seems to be to provide un-actionable, useless warnings. Perhaps that's sufficient in some eyes.

We are not having a policy discussion about these tradeoffs, and that’s a shame.

Here are some questions that we can think about:

  • Is the model presented above of how attacks are detected reasonable?
  • Is there anything classified which changes the general debate? (No, we learned that from the CRISIS report.)
  • What should a government warning include? A single IOC? Some fraction in a range (say 25-35%)? All known IOCs? (Using a range is interesting because it reduces information leakage back to an attacker who's compromised a source; there's a sketch of this idea after the list.)
  • How do we get IOCs to be bulk declassified so they can be used at organizations whose IT staff do not have clearances, cannot get clearances rapidly, and post-OPM ain’t likely to?
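As a thought experiment, the "fraction in a range" idea from the list above could be as simple as the following sketch. The function, bounds, and sample IOCs are my assumptions; this is a way of thinking about the tradeoff, not a proposal:

```python
# Toy sketch of "disclose a random fraction in a range" for IOCs: the
# recipient gets useful indicators, but an attacker who sees the warning
# can't tell exactly what fraction of their infrastructure is known.
import random

def partial_disclosure(iocs, low=0.25, high=0.35):
    fraction = random.uniform(low, high)
    k = max(1, round(len(iocs) * fraction))
    return random.sample(iocs, k)

known_iocs = ["203.0.113.7", "198.51.100.22", "evil.example.com",
              "backdoor.example.net", "192.0.2.99"]
print(partial_disclosure(known_iocs))
```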

That's a start. What other questions should we be asking so we can move from "Congressional leaders were briefed a year ago on hacking of Democrats" to "hackers were rebuffed from interfering in our elections," or even "hackers don't even bother trying to attack elections"?

[Update: In “AS FBI WARNS ELECTION SITES GOT HACKED, ALL EYES ARE ON RUSSIA“, Wired links to an FBI Flash, which has an explicit set of indicators, including IPs and httpd log entries, along with explicit recommendations such as “Search logs for commands often passed during SQL injection.” This is far more detail than was in these documents a few years ago, and far more detail than I expected when I wrote the above.]
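To illustrate how actionable that is, "search logs for commands often passed during SQL injection" reduces to a few lines. The patterns and log path below are my assumptions; a real search would use the indicators from the Flash itself:

```python
# A minimal sketch of "search logs for commands often passed during SQL
# injection." Patterns and log path are illustrative; real searches
# should use the indicators published in the FBI Flash.
import re

SQLI_PATTERNS = re.compile(
    r"(union\s+select|or\s+1=1|information_schema|sleep\(|xp_cmdshell)",
    re.IGNORECASE,
)

with open("access.log") as f:
    for lineno, line in enumerate(f, 1):
        if SQLI_PATTERNS.search(line):
            print(f"{lineno}: {line.rstrip()}")
```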

Consultants Say Their Cyber Warnings Were Ignored

by adam on August 3, 2016

Back in October, 2014, I discussed a pattern of "Employees Say Company Left Data Vulnerable," and it's a pattern that we've seen often since. Today, I want to discuss the consultant's variation on the story. This is less common, because generally smart consultants don't comment on the security of their clients. In this case, it doesn't seem like the consultant's report was leaked, but people are discussing it after a high-profile issue.

In brief, the DNC was hacked, probably by Russian intelligence, and emails were given to Wikileaks. Wikileaks published them without redacting things like credit card numbers or social security numbers. The head of the DNC has stepped down. (Someone losing their job post-breach is rare. However, she did not lose her job because of the breach; she lost it because the breach included information about how her organization tilted the playing field, and how she lied about doing so.)

This story captures a set of archetypes. I want to use this story as a foil for those archetypes, not to critique any of the parties. I'll follow the pattern from "employees vs company" and present those three sections: "I told you so," "potential spending," and "how to do better." I'll also comment on preventability and "shame."

Was it preventable?

Computer security consultants hired by the DNC made dozens of recommendations after a two-month review, the people said. Following the advice, which would typically include having specialists hunt for intruders on the network, might have alerted party officials that hackers had been lurking in their network for weeks… (“Democrats Ignored Cybersecurity Warnings Before Theft,” Michael Riley, Bloomberg.)

People are talking about this as if the DNC was ever likely to stop Russian intelligence from breaking into their computers. That’s a very, very challenging goal, one at which both US and British intelligence have failed. (And as I write this, an FBI agent has been charged with espionage on behalf of China.) There’s a lot of “might,” “could have,” and other words that say “possible” without assessing “probable.”

I told you so!

The report included "dozens of recommendations," some of which, such as "taking special precautions to protect any financial information related to donors," might be a larger project than a PCI compliance initiative. (The logic is that financial information collected by a party is more than just card numbers; there seem to be a lot of SSNs in the data as well.) If one recommendation is "get PCI compliant," then "dozens of recommendations" might be a Sisyphean task, or perhaps the Augean stables are a better analogy. In either case, only in mythology can the actions be completed.

Missing from the discussion I’ve seen so far is any statement of what was done. Did the organization do the top-5 things the consultants said to do? (Did they even break things out into a top-5?)

Potential Spending

The review found problems ranging from an out-of-date firewall to a lack of advanced malware detection technology on individual computers, according to two of the people familiar with the matter.

It sounds like "advanced malware detection technology" would be helpful here, right? Hindsight is 20/20. An out-of-date firewall? Was it missing security updates (which would be worrisome, but less worrisome than one might think, depending on what those updates fix), or was it just not the latest revision? If it's not the latest revision, it can probably still do its job. In carefully reading the article, I saw no evidence that any single recommendation, or even implementing all of them, would have prevented the breach.

The DNC is a small organization. They were working with a rapidly shifting set of campaign workers from the Sanders and Clinton campaigns. I presume they're also working on a great many state races, and the organizations those politicians are setting up.

I do not believe that doing everything in the consultant's report could reasonably be expected to prevent a break-in by a determined mid-sized intelligence agency.


Shame

"Shame on them. It looks like they just did the review to check a box but didn't do anything with it," said Ann Barron-DiCamillo, who was director of US-CERT, the primary agency protecting U.S. government networks, until last February. "If they had acted last fall, instead of those thousands of e-mails exposed it might have been much less."

Via Meredith Patterson, I saw “The Left’s Self-Destructive Obsession with Shame,” and there’s an infosec analog. Perhaps they would have found the attackers if they’d followed the advice, perhaps not. Does adding shame work to improve the cybers? If it did, it should have done so by now.

How to do better

I stand by what I said last time. The organization has paid the PR price, and we have learned nothing. What a waste. We should talk about exactly what happened at a technical level.

We should stop pointing fingers and giggling. It isn’t helping us. In many ways, the DNC is not so different from thousands of other small or mid-size organizations who hold sensitive information. Where is the list of effective practice for them to follow? How different is the set of recommendations in this instance from other instances? Where’s the list of “security 101” that the organization should have followed before hiring these consultants? (My list is at “Security 101: Show Your List!.”)

We should define the "smoke alarms and sprinklers" of cyber. Really, why does an organization like the DNC need to pay $60,000 for a cybersecurity policy? It's a little hard to make sense of, but I think that the net operating expenditures of $5.1m are a decent proxy for the size of the organization, and (if that's right) roughly 1% of their net operating expenses went to a policy review ($60,000 / $5.1m ≈ 1.2%). Was that a good use of money? How else should they have spent that? What concrete things do we, as a profession or as a community, say they should have done? Is there a reference architecture, or a system in a box, which they could have run that we expect would resist Russian intelligence?

We cannot expect every small org to re-invent this wheel. We have to get better at helping them.

Dear Mr. President

by adam on July 14, 2016

U.S. President Barack Obama says he's "concerned" about the country's cyber security and adds, "we have to learn from our mistakes."

Dear Mr. President, what actions are we taking to learn from our mistakes? Do we have a repository of mistakes that have been made? Do we have a “capability” for analysis of these mistakes? Do we have a program where security experts can gain access to the repository, to learn from it?

I’ve written extensively on this problem, here on this blog, and in the book from which it takes its name. We do not have a repository of mistakes. We do not have a way to learn from those mistakes.

I’ve got to wonder why that is, and what the President thinks we’re doing to learn from our mistakes. I know he has other things on his mind, and I hope that our officials who can advise him directly take this opportunity to say “Mr. President, we do not learn from our mistakes.”

(Thanks to Chris Wysopal for the pointer to the comment.)

A New Way to Tie Security to Business

by adam on June 20, 2016

[Image: a woman with a chart saying "that's ok, I don't know what it means either"]

As security professionals, sometimes the advice we get is to think about the security controls we deploy as some mix of “cloud access security brokerage” and “user and entity behavioral analytics” and “next generation endpoint protection.” We’re also supposed to “hunt”, “comply,” and ensure people have had their “awareness” raised. Or perhaps they mean “training,” but how people are expected to act post-training is often maddeningly vague, or worse, unachievable. Frankly, I have trouble making sense of it, and that’s before I read about how your new innovative revolutionary disruptive approach is easy to deploy to ensure that APT can’t get into my network to cloud my vision.

I’m making a little bit of a joke, because otherwise it’s a bit painful to talk about.

Really, we communicate badly. It hurts our ability to drive change to protect our organizations.

A CEO once explained his view of cyber. He said “security folks always jump directly into details that just aren’t important to me. It’s as if I met a financial planner and he started babbling about a mutual fund’s beta before he understood what my family needed.” It stuck with me. Executives are generally smart people with a lot on their plates, and they want us, as security leaders, to make ourselves understood.

I’ve been heads down with a small team, building a new kind of risk management software. It’s designed to improve executive communication. Our first customers are excited and finding that it’s changing the way they engage with their management teams. Right now, we’re looking for a few more forward-looking organizations that want to improve their security, allocate their resources better and link what they’re doing to what the business needs.

If you’re a leader at such a company, please send me an email [first]@[last].org, leave a comment or reach out via linkedin.

Sneak peeks at my new startup at RSA

by adam on February 18, 2016


Many executives have been trying to solve the problem of connecting security to the business, and we’re excited about what we’re building to serve this important and unmet need. If you present security with an image like the one above, we may be able to help.

My new startup is getting ready to show our product to friends at RSA. We’re building tools for enterprise leaders to manage their security portfolios. What does that mean? By analogy, if you talk to a financial advisor, they have tools to help you see your total financial picture: assets and debts. They’ll help you break out assets into long term (like a home) or liquid investments (like stocks and bonds) and then further contextualize each as part of your portfolio. There hasn’t been an easy way to model and manage a portfolio of control investments, and we’re building the first.
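To give a feel for the analogy, here's a toy sketch of what "a portfolio of control investments" might look like as data. The controls, costs, and risk labels are made up for illustration; this is emphatically not the product:

```python
# Toy illustration of the portfolio analogy: security controls as
# investments with a cost and the business risks they address.
# All names and numbers are invented for illustration.
from dataclasses import dataclass

@dataclass
class Control:
    name: str
    annual_cost: float          # licensing + operations
    risks_addressed: list[str]  # which business risks it covers

portfolio = [
    Control("endpoint protection", 120_000, ["malware", "ransomware"]),
    Control("security awareness program", 40_000, ["phishing"]),
    Control("offsite backups", 60_000, ["ransomware", "data loss"]),
]

# Contextualize spend by risk, the way a financial advisor breaks out assets.
spend_by_risk: dict[str, float] = {}
for c in portfolio:
    for risk in c.risks_addressed:
        spend_by_risk[risk] = spend_by_risk.get(risk, 0) + c.annual_cost

for risk, spend in sorted(spend_by_risk.items()):
    print(f"{risk}: ${spend:,.0f}")
```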

If you’re interested, we have a few slots remaining for meetings in our suite at RSA! Drop me a line at [first]@[last].org, in a comment or reach out over linkedin.