by adam on March 29, 2013
While everyone else is talking about APT, I want to talk about risk thinking versus outcome thinking.
I have a lot of colleagues I respect who like to think about risk in some fascinating ways. For example, there are the Risk Hose and SIRA folks.
I’m inspired by “To Encourage Biking, Cities Lose the Helmets”:
In the United States the notion that bike helmets promote health and safety by preventing head injuries is taken as pretty near God’s truth. Un-helmeted cyclists are regarded as irresponsible, like people who smoke. Cities are aggressive in helmet promotion.
But many European health experts have taken a very different view: Yes, there are studies that show that if you fall off a bicycle at a certain speed and hit your head, a helmet can reduce your risk of serious head injury. But such falls off bikes are rare — exceedingly so in mature urban cycling systems.
On the other hand, many researchers say, if you force or pressure people to wear helmets, you discourage them from riding bicycles. That means more obesity, heart disease and diabetes. And — Catch-22 — a result is fewer ordinary cyclists on the road, which makes it harder to develop a safe bicycling network. The safest biking cities are places like Amsterdam and Copenhagen, where middle-aged commuters are mainstay riders and the fraction of adults in helmets is minuscule.
“Pushing helmets really kills cycling and bike-sharing in particular because it promotes a sense of danger that just isn’t justified.”
Given that we don’t have statistics on infosec analogs to head injuries or obesity, I’m curious: where can we draw the best infosec analogy to bicycling and helmets? Where are our outcomes potentially worse because we focus on every little risk?
My favorite example is password change policies, where we absorb substantial amounts of everyone’s time without evidence that they’ll improve our outcomes.
by adam on March 25, 2013
So I was listening to the Shmoocon presentation on information sharing, and there was a great deal of discussion of how sharing too much information could reveal to an attacker that they’d been detected. I’ve discussed this problem a bit in “The High Price of the Silence of Cyberwar,” but wanted to talk more about it. What struck me is that the audience seemed to be thinking that an MD5 of a bit of malware was equivalent to revealing the Ultra intelligence taken from Enigma decrypts.
Now perhaps that’s because I’m re-reading Neal Stephenson’s Cryptonomicon, where one of the subplots follows the exploits of Unit 2702, dedicated to ensuring that use of Ultra is explainable in other ways.
But really, it was pretty shocking to hear people nominally dedicated to the protection of systems actively working to deny themselves information that might help them detect an intrusion faster and more effectively.
For an example of how that might work, read “Protecting People on Facebook.” First, let me give kudos to Facebook for revealing an attack they didn’t have to reveal. Second, Facebook says “we flagged a suspicious domain in our corporate DNS logs.” What is a suspicious domain? It might be one never seen before; more likely, it’s one that some other organization has flagged as malicious. When organizations reveal the IPs or domain names of command and control servers, everyone gets a chance to learn if they’re compromised, and there are other positive effects besides. Third, it reveals a detection method that actually caught a bad guy, and that you might or might not be using. Now you can consider whether you want to invest in DNS logging.
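As a sketch of how that kind of detection might work, here is a toy check of DNS query logs against a shared list of flagged domains. The file names and log format are invented for illustration; real logs and threat feeds vary.

```python
# Minimal sketch: flag DNS queries that match a shared list of
# known-bad domains. File names and log format are hypothetical.

def load_flagged_domains(path):
    """Load one domain per line, e.g. from a shared threat feed."""
    with open(path) as f:
        return {line.strip().lower().rstrip(".") for line in f if line.strip()}

def suspicious_queries(log_path, flagged):
    """Yield log lines whose queried domain appears on the flagged list."""
    with open(log_path) as f:
        for line in f:
            fields = line.split()
            if not fields:
                continue
            # Assume the queried domain is the last whitespace-separated field.
            domain = fields[-1].lower().rstrip(".")
            if domain in flagged:
                yield line.rstrip()

if __name__ == "__main__":
    flagged = load_flagged_domains("flagged_domains.txt")
    for hit in suspicious_queries("dns_queries.log", flagged):
        print("suspicious:", hit)
```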
Now, there’s a time to be quiet during incident response. But there’s a very real tradeoff to be made between concealing your knowledge of a breach and aiding and abetting other breaches.
Maybe it’s time for us to get angry when a breach disclosure doesn’t include at least one IP and one MD5? Because when the disclosure doesn’t include those facts, our ability to defend ourselves is dramatically reduced.
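For instance, here is a minimal sketch of what a defender can do the moment a disclosure includes hashes. The hash value and directory below are placeholders, not real indicators; substitute the actual MD5s from a disclosure.

```python
import hashlib
from pathlib import Path

# Placeholder value; substitute the MD5s published in a disclosure.
DISCLOSED_MD5S = {"0" * 32}

def md5_of(path, chunk_size=1 << 20):
    """Hash a file in chunks so large binaries don't exhaust memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(root):
    """Report any file whose MD5 matches a disclosed indicator."""
    for path in Path(root).rglob("*"):
        if path.is_file() and md5_of(path) in DISCLOSED_MD5S:
            print("match:", path)

scan("/tmp/samples")  # hypothetical directory of suspect files
```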
by adam on March 22, 2013
This week I have experienced an echo of this pattern at the 2013 WEF meeting. But this time my unease does not revolve around any financial threats, but another issue – cyber security.
[The] crucial point is this: even if some companies are on top of the issue, others are not, and without more public debate, it will be tough to get boards to act. Without more disclosure it will also be difficult for investors to start pricing in these risks. So it is high time shareholders began demanding more information from companies about the issue – not just about the scale of the cyber attacks, but also the moves being taken to fend them off.
And if companies refuse to answer, then shareholders – or the government – should ask them why. After all, if there is one thing we learnt from 2007, it is that maintaining an embarrassed silence about risks does not usually make them go away; least of all when there is potential damage to consumers (and investors) as well as the companies under attack.
So writes Gillian Tett in the Financial Times, “Time to break wall of silence on escalating cyber attacks.”
Thanks to Russell Thomas for the pointer.
by Russell on March 18, 2013
One big problem with existing methods for estimating breach impact is the lack of credibility and reliability of the evidence behind the numbers. This is especially true if the breach is recent or if most of the information is not available publicly. What if we had solid evidence to use in breach impact estimation? This leads to the idea of “Indicators of Impact” to provide ‘ground truth’ for the estimation process.
It is premised on the view that breach impact is best measured by the costs or resources associated with response, recovery, and restoration actions taken by the affected stakeholders. These activities can include both routine incident response and rarer activities. (See our paper for more.) This leads to ‘Indicators of Impact’, which are evidence of the existence or non-existence of these activities. Here’s a definition (p 23 of our paper):
An ‘Indicator of Impact’ is an observable event, behavior, action, state change, or communication that signifies that the breached or affected organizations are attempting to respond, recover, restore, rebuild, or reposition because they believe they have been harmed. For our purposes, Indicators of Impact are evidence that can be used to estimate branching activity models of breach impact, either the structure of the model or key parameters associated with specific activity types. In principle, every Indicator of Impact is observable by someone, though maybe not outside the breached organization.
Of course, there is a close parallel to the now-widely-accepted idea of “Indicators of Compromise,” which are basically technical traces associated with a breach event. There’s a community supporting an open exchange format, OpenIoC. The big difference is that Indicators of Compromise are technical and are used almost exclusively in tactical information security. In contrast, Indicators of Impact are business-oriented, even if they involve InfoSec activities, and are used primarily for management decisions.
From Appendix B, here are a few examples:
- Was there a forensic investigation, above and beyond what your organization would normally do?
- Was this incident escalated to the executive level (VP or above), requiring them to make resource decisions or to spend money?
- Was any significant business process or function disrupted for a significant amount of time?
- Due to the breach, did the breached organization fail to meet any contractual obligations with its customers, suppliers, or partners? If so, were contractual penalties imposed?
- Were top executives or the Board significantly diverted by the breach and aftermath, such that other important matters did not receive sufficient attention?
The list goes on for three pages in Appendix B but we fully expect it to grow much longer as we get experience and other people start participating. For example, there will be indicators that only apply to certain industries or organization types. In my opinion, there is no reason to have a canonical list or a highly structured taxonomy.
As signals, the Indicators of Impact are not perfect, nor do they individually provide sufficient evidence. However, they have the very great benefit of being empirical, subject to documentation and validation, and potentially observable in many instances, even outside of InfoSec breach events. In other words, they provide a ‘ground truth’ which has been sorely lacking in breach impact estimation. When assembled as a mass of evidence and using appropriate inference and reasoning methods (e.g. see this great book), Indicators of Impact could provide the foundation for robust breach impact estimation.
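To illustrate the “mass of evidence” idea, here is a toy sketch that combines yes/no indicators using naive (independence-assuming) Bayesian updating. The indicator names echo the examples above, but the likelihood values are invented for the example and are not estimates from our paper.

```python
import math

# Toy likelihoods: P(indicator observed | high-impact breach) and
# P(indicator observed | low-impact breach). All values are invented.
INDICATORS = {
    "forensic_investigation": (0.9, 0.3),
    "executive_escalation":   (0.8, 0.2),
    "contractual_penalties":  (0.4, 0.05),
}

def posterior_high_impact(observed, prior=0.5):
    """Combine observed indicators via naive Bayes in log-odds form.
    Indicators not observed are simply ignored, for simplicity."""
    log_odds = math.log(prior / (1 - prior))
    for name in observed:
        p_high, p_low = INDICATORS[name]
        log_odds += math.log(p_high / p_low)
    return 1 / (1 + math.exp(-log_odds))

# Two indicators observed -> posterior probability of high impact.
print(posterior_high_impact({"forensic_investigation",
                             "executive_escalation"}))
```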
There are also applications beyond breach impact estimation. For example, they could be used in resilience planning and preparation. They could also be used as part of information sharing in critical infrastructure to provide context for other information regarding threats, attacks, etc. (See this video of a Shmoocon session for a great panel discussion regarding challenges and opportunities regarding information sharing.)
Fairly soon, it would be good to define a lightweight standard format for Indicators of Impact, possibly as an extension to VERIS. I also think that Indicators of Impact could be a good addition to the upcoming NIST Cybersecurity Framework. There’s a public meeting April 3rd, and I might fly out for it; either way, I will submit comments to the NIST RFI.
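To make “lightweight” concrete, here is one hypothetical shape such a record might take, rendered as JSON from Python. The field names are invented for illustration and are not part of VERIS or any existing standard.

```python
import json

# Hypothetical Indicator of Impact record; field names are invented
# for illustration and are not part of VERIS or any standard.
indicator = {
    "indicator_type": "executive_escalation",
    "observed": True,
    "date_observed": "2013-03-01",
    "source": "public_statement",   # who or what observed it
    "confidence": "medium",         # reporter's confidence in the observation
    "notes": "CEO discussed the breach on the earnings call.",
}

print(json.dumps(indicator, indent=2))
```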
Your thoughts and comments?
by Russell on March 17, 2013
Adam just posted a question about CEO “willingness to pay” (WTP) to avoid bad publicity regarding a breach event. As it happens, we just submitted a paper to the Workshop on the Economics of Information Security (WEIS) that proposes a breach impact estimation method that might apply to Adam’s question. We use the WTP approach in a specific way, by posing this question to all affected stakeholders:
“Ex ante, how much would you be willing to spend on response and recovery for a breach of a particular type? Through what specific activities and processes?”
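As a toy illustration of how answers to that question might roll up into an ex ante estimate, consider summing expected spend across response and recovery activities. The activities, probabilities, and dollar figures below are invented for the example.

```python
# Toy ex ante estimate: expected response-and-recovery spend, summed
# over activities. All numbers are invented for illustration.
activities = [
    # (activity, probability it is needed, cost in dollars if needed)
    ("forensic investigation", 0.9, 250_000),
    ("customer notification",  0.7, 400_000),
    ("credit monitoring",      0.5, 1_200_000),
]

expected_spend = sum(p * cost for _, p, cost in activities)
print(f"Ex ante willingness to spend: ${expected_spend:,.0f}")
```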
We hope this approach can bridge theoretical and empirical research, and also professional practice. We also hope that this method can be used in public disclosures.
In the next few months we will be applying this to half a dozen historical breach episodes to see how it works out. This model will also probably find its way into my dissertation as “substrate”. The dissertation focus is on social learning and institutional innovation.
Comments and feedback are most welcome.
by adam on March 15, 2013
We all know how companies don’t want to be named after a breach. Here’s a random question: how much is that worth to a CEO? What would a given organization be willing to pay to keep its name out of the press? (A priori, with at best a prediction of how the press will react.) Please don’t just say “a lot”; help me quantify it.
Another way to ask this question: What should a business be willing to pay to not report a security breach?
(Bonus question: how is it changing over time?)
by adam on February 26, 2013
So this week is RSA, and I wanted to offer up some advice on how to engage. I’ve already posted my “BlackHat Best Practices/Survival kit.”
First, if you want to ask great questions, pay attention. There are things more annoying than a question that was answered while the questioner was tweeting, but you still don’t want to be that person.
Second, if you want to ask a good question, ask a question that you think others will want to hear answered. If your question is narrow, go up to the speaker afterwards.
Now, there are some generic best practice questions that I love to ask, and want to encourage you to ask.
- You claimed “X”, but didn’t explain why. Could you briefly cover your methodology and data for that claim?
- You said “X” is a best practice. Can you cover what practices you would cut to ensure there are resources available to do “X”?
- You said “if you get breached, you’ll go out of business.” Last year, 2600 companies announced data breaches. How many of them are out of business?
- You said that “X” dramatically increased your organization’s security. Since we live in an era of ‘assume breach’, can I assume that your organization is now committed to publishing details of any breaches that happen despite X?
I’m sure there are other good questions; please share your favorites, and I’ll try for a new post tomorrow.
by adam on February 22, 2013
The New York Times is running a debate on whether companies should tell us when they’ve been hacked. It currently has 4 entries, 3 of which are dramatically in favor of more disclosure. I’m personally fond of Lee Tien’s “We Need Better Notification Laws.”
My personal preference is of course (ahem) fascinating to you, since you’re reading this blog, but more seriously, it’s not what I expect anyone else to find interesting.
What’s interesting to me is that the only person who they could find to say no is Alexander Tabb, whose bio states that he “is a partner at TABB Group, a capital markets research and consulting firm.” I don’t want to insult Mr Tabb, and so found a fuller bio here, which includes “Mr. Tabb is an expert in the field of international affairs, with specialization in the developing world, crisis management, international security, supply chain security and travel safety and security. He joined Tabb Group in October 2004. [From 2001 to 2004] Mr. Tabb served as an Associate Managing Director of Security Services Group of Kroll Inc., the international risk consulting company.”
I find it fascinating that someone of his background is the naysayer. Perhaps the Times was unable to find anyone practicing in information security to claim that companies should not tell us when they’ve been hacked?
by adam on February 21, 2013
Law firm Proskauer has published a client alert that “HHS Issues HIPAA/HITECH Omnibus Final Rule Ushering in Significant Changes to Existing Regulations.” Most interesting to me was the breach notice section:
Section 13402 of the HITECH Act requires covered entities to provide notification to affected individuals and to the Secretary of HHS following the discovery of a breach of unsecured protected health information. HITECH requires the Secretary to post on an HHS Web site a list of covered entities that experience breaches of unsecured protected health information involving more than 500 individuals. The Omnibus Rule substantially alters the definition of breach. Under the August 24, 2009 interim final breach notification rule, breach was defined as the “acquisition, access, use, or disclosure of protected health information in a manner not permitted under [the Privacy Rule] which compromises the security or privacy of the protected health information.” The phrase “compromises the security or privacy of [PHI]” was defined as “pos[ing] a significant risk of financial, reputational, or other harm to the individual.”

According to HHS, “some persons may have interpreted the risk of harm standard in the interim final rule as setting a much higher threshold for breach notification than we intended to set. As a result we have clarified our position that breach notification is necessary in all situations except those in which the covered entity or business associate, as applicable, demonstrates that there is a low probability that the protected health information has been compromised. . . .”
The client alert goes on to lay out the four risk factors that must be considered.
I’m glad to see this. The prior approach has been a full employment act for lawyers, and a way for organizations to weasel out of their ethical and legal obligations. We are likely to see more regulatory updates of this form, despite intensive lobbying.
If organizations want a different risk threshold, it’s up to them to propose one that’s credible to regulators and the public.
by adam on February 18, 2013
We were hacked again.
The vuln used was 0day, and has now been patched, thanks to David Mortman and Matt Johansen, and the theme has also been updated, thanks to Rodrigo Galindez. Since we believe in practicing the transparency we preach, I wanted to discuss what happened and some options we considered.
Let me dispense with the markety-speak.
Alun Jones found an XSS attack, and let us know about it discreetly. It’s tempting to throw around words like 0day because it makes us seem less lame. Actually, it’s tempting because it makes me seem less lame.
As I’ve said before, we run this blog on the cheap as a way to share ideas. We don’t have any income here, and that means that we use free resources like WordPress and Modernist. We could take money out of our beer budget or time away from our families to run security scans, but haven’t.
This is much like many organizations. They have limited infosec budgets. There’s always more you could be doing, and in hindsight probably should have been doing, but identifying it in advance is tough because we don’t know how compromises tend to happen.
I gave serious consideration to announcing the vuln before we fixed it, to enable people to make risk management decisions. I decided against that on two grounds. The first and more important was that we’d be exposing the other folks who use the theme to risk that they might not be set up to respond to. The second was that in our case, the impact seems relatively constrained. We work hard to ensure you don’t need to run code to read our blog, and I’d be shocked to discover that anyone making security choices with things like NoScript or Trusted Zones has this blog in such a whitelist.
If you’ve made the decision to let this blog run code, I recommend you fix that, because we are not investing in securing our site in line with that expectation. If you’re a security pro using Windows, I urge you to use EMET, and in any event to limit where your browser will run code to a carefully selected whitelist.
Anyway, back to the vuln. We’re a little disappointed to not be targeted by this Java 0day. We’d feel much better if this was “serious” 0day. But you know what? This blog could be pwned and used to distribute that Java stuff. And XSS is serious, even if it is common.
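For readers who haven’t met this class of bug, here is a minimal sketch of the usual fix: escape untrusted input before rendering it into HTML. This is a generic illustration, not the actual patch applied to this blog.

```python
import html

def render_comment(author, body):
    """Escape untrusted fields before embedding them in HTML.
    Without html.escape(), a body like the one below would run
    as script in the reader's browser."""
    return "<p><b>{}</b>: {}</p>".format(html.escape(author),
                                         html.escape(body))

print(render_comment("mallory", '<script>alert("xss")</script>'))
```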
One option we gave serious consideration to was “offensive security.” We have chosen not to hack back, but if we did, we do not believe we would owe a duty of confidentiality to other “victims” of this hacking spree. (We don’t know how many victims Alun has, but we bet it’s a lot more than fit on a postcard.) We would believe that there’s a reasonable public interest served by naming those victims, so that their shareholders can assess whether the breaches are material and should have been disclosed.