by adam on April 11, 2013
On a fairly regular basis, I come across pages like this one from SANS, which contain fascinating information taken from exploit kit control panels:
There are all sorts of interesting numbers in that picture. For example, the success rate for owning XP machines (19.61%) is three times that of Windows 7. (As an aside, the XP number is perhaps lower than “common wisdom” in the security community would have it.) There are also numbers for the success rates of exploits, ranging from Java OBE at 35% down to MDAC at 1.85%.
I’m fascinated by these numbers, and have two questions:
- Is anyone capturing the statistics shown and running statistics over time? (A rough sketch of what that capture might look like follows below.)
- Is there an aggregation of all these captures? If not, what are the best search terms to find them?
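For what it’s worth, here’s a minimal sketch, in Python, of what capturing these panel numbers over time might look like so that success rates can be trended later. The file name, fields, and counts are all made up for illustration; nothing here reflects an actual panel’s format.

```python
# Illustrative only: append one capture's exploit-kit panel numbers to a
# CSV log so that success rates can be trended over time. The file name,
# columns, and counts below are hypothetical, not a real panel schema.
import csv
from datetime import date

def record_panel_stats(path, rows):
    """Append (OS, loads, infections) rows, with a computed success rate."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for os_name, loads, infections in rows:
            rate = infections / loads if loads else 0.0
            writer.writerow([date.today().isoformat(), os_name,
                             loads, infections, round(rate, 4)])

# Hypothetical numbers read off a captured control-panel screenshot
record_panel_stats("panel_stats.csv", [
    ("Windows XP", 12000, 2350),
    ("Windows 7", 15000, 990),
])
```

With a few months of captures in one place, the trend questions above become a matter of grouping by date and OS rather than hunting for screenshots.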
by adam on April 10, 2013
Thanks to Addison Wesley, who are offering 40% off the book. Apply code NEWSCHOOL40 to get your discounted copy. (You apply the code after proceeding to checkout.)
by Russell on April 9, 2013
As it happens, both the US Government and the UK government are leading “cyber security standards framework” initiatives right now. The US is using a consensus process to “incorporate existing consensus-based standards to the fullest extent possible”, including “cybersecurity standards, guidelines, frameworks, and best practices” and “conformity assessment programs”. In contrast, the UK is asking for evidence that any proposed standard or practice is beneficial or even “best”.
The Brits are doing it right. I hope the US follows their lead.
by adam on April 8, 2013
Five years ago Friday was the official publication date of The New School of Information Security. I want to take this opportunity to look back a little and look forward to the next few years.
Five years ago, fear of a breach and its consequences was nearly universal, and few people thought anything but pain would come of talking about our problems. Many people found it shocking when we challenged best practices, or asked if there was evidence for the ways we invested in security. I’d like to think we played some small role in how the culture of information security has changed. I’m hopeful that culture will continue to evolve in ways that focus on outcomes and data about those outcomes. At the same time, as I reflect, I go back to what Andrew and I wrote.
We wrote that the New School of Information Security is:
- Learning from other professions, such as economics and psychology, to unlock the problems that stymie the security field. The way forward cannot be found solely in mathematics or technology.
- Sharing objective data and analysis widely. A fetish for secrecy has held us back.
- The embrace of the scientific method for solving important security problems. Analyzing real-world outcomes is the best way for information security to become a mature discipline.
We’ve seen tremendous movement in the sharing of objective data. From the DBIR to Mandiant’s report to revelations from Google, RSA, Bit9 and many others, we see people willing to talk about what went wrong. Sure, they sometimes add some spin, but that’s human nature. We’re seeing data being shared, or as I now like to say, published. We can’t take credit for that. Lots of people did a lot of hard work to convince their organizations to publish that data, and we’re learning from it and collections like the Open Security Foundation’s dataset.
We’ve also heard from countless folks about how much they liked the book, how it’s influenced their thinking and their actions, and that’s been a wonderful return on our work.
What we haven’t seen as much of is learning from other professions, such as economics and psychology. It’s still too common to complain that people will click on anything, and we still argue, with a paucity of data, about whether training people makes any sense. (Although if you have any data, I’d love to get it some attention at Black Hat.)
We also haven’t yet seen a lot of published data on the effectiveness of various security investments. As far as I know, no compliance regime yet requires breached entities to report back to those who create the standard about what went wrong, perpetuating the wicked environment in which we work, and wasting the time and money of those who need to comply.
Sadly, the previous two paragraphs relate to what we wrote in chapters 5 and 6. For those of you who enjoyed the book, let me ask you to re-read them. For those of you who haven’t yet read it, now’s a great time. [Update: Even better, Addison Wesley is offering 40% off with code NEWSCHOOL40 to help us celebrate! Apply the code after proceeding to checkout.]
Andrew and I remain optimistic that our world can get better, and we’re proud to have helped illuminate a path forward.
by adam on April 3, 2013
According to Wired, “Army Practices Poor Data Hygiene on Its New Smartphones, Tablets.” And I think that’s awesome. No, really, not the ironic sort of awesome, but the awesome sort of awesome, because what the Army is doing is a large scale natural experiment in “does it matter?”
Over the next n months, the Pentagon’s IG can compare incidents in the Army to those in the Navy and the Air Force, and see who’s doing better and who’s doing worse. In theory, the branches of the military should all be otherwise roughly equivalent in security practice and culture (compared to, say, Twitter’s corporate culture, or that of Goldman Sachs).
With that data, they can assess if the compliance standards for smartphones make a difference, and what difference they make.
So I’d like to call on the Army to not remediate any of the findings for 30 or 60 days. I’d like to call on the Pentagon IG to analyze incidents in a comparative way, and let us know what he finds.
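To make “analyze incidents in a comparative way” concrete, here’s a minimal sketch of one such comparison: per-device incident rates with rough confidence intervals. Every number in it is invented for illustration; I have no idea what the real device or incident counts are. If the intervals overlap heavily, the extra controls probably aren’t making a measurable difference.

```python
# A sketch of a comparative analysis across branches, assuming each one
# reports (incident count, device count) over the same period.
# All numbers below are hypothetical placeholders, not real DoD figures.
import math

def incident_rate_ci(incidents, devices, z=1.96):
    """Incident rate per device with a rough normal-approximation 95% CI."""
    rate = incidents / devices
    se = math.sqrt(rate * (1 - rate) / devices)
    return rate, max(0.0, rate - z * se), rate + z * se

branches = {
    "Army (findings unremediated)": (87, 14000),   # hypothetical
    "Air Force (baseline)": (52, 9000),            # hypothetical
}

for name, (incidents, devices) in branches.items():
    rate, lo, hi = incident_rate_ci(incidents, devices)
    print(f"{name}: {rate:.2%} per device (95% CI {lo:.2%} to {hi:.2%})")
```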
Update: I wanted to read the report, which, as it turns out, has been taken offline. (See Consolidated Listing of Reports, which says “Report Number DODIG-2013-060, Improvements Needed With Tracking and Configuring Army Commercial Mobile Devices, issued March 26, 2013, has been temporarily removed from this website pending further review of management comments.”)
However, based on the Wired article, this is not a report about breaches or bad outcomes; it’s a story about controls and control objectives.
Spending time or money on those controls may or may not make sense. Without information about the outcomes experienced without those controls, the efficacy of the controls is a matter of opinion and conjecture.
Further, spending time or money on those controls is at odds with other things. For example, the Army might choose to spend 30 minutes training every soldier to password lock their device, or they could spend that 30 minutes on additional first aid training, or Pashtun language, or some other skill that they might, for whatever reason, want soldiers to have.
It’s well past time to stop focusing on controls for the sake of controls, and start testing our ideas. No organization can afford to implement every idea. The Army, the Pentagon IG and other agencies may have a perfect opportunity to test these controls. To not do so would be tragic.
by adam on April 1, 2013
Hacking humans is an important step in today’s exploitation chains. From “2011 Recruitment plan.xls” to instant messenger URL delivery at the start of Aurora, the human in the loop is being exploited just as much as the machine. In fact, with the right story, you might not even need an exploit at all.
So I’m looking to put together an awesome track on hacking humans for Black Hat USA 2013. I’d love to see work on things like:
- Unusable security or privacy (preferably with user studies)
- The cognitive science of attention
- Conditioned-safe ceremonies
- Measuring the effects of security awareness training
- Human compliance budgets
- Threat modeling techniques for user interfaces
- What any of these attacks might teach defenders about user interface design.
- Engineering for real world human error
- New ceremony analytic techniques
- New frameworks for thinking about hacking humans, or defending against attacks on people
- This list is incomplete
At Black Hat we like talks about hacking stuff. We like technical talks. We don’t like pure theory without demonstrated application. We don’t have talks about getting a UPS uniform.
If you have such content, I encourage you to check out the Black Hat Call for Papers and consider submitting by April 15th.
by adam on March 29, 2013
While everyone else is talking about APT, I want to talk about risk thinking versus outcome thinking.
I have a lot of colleagues I respect who like to think about risk in some fascinating ways. For example, there are the Risk Hose and SIRA folks.
I’m inspired by “To Encourage Biking, Cities Lose the Helmets”:
In the United States the notion that bike helmets promote health and safety by preventing head injuries is taken as pretty near God’s truth. Un-helmeted cyclists are regarded as irresponsible, like people who smoke. Cities are aggressive in helmet promotion.
But many European health experts have taken a very different view: Yes, there are studies that show that if you fall off a bicycle at a certain speed and hit your head, a helmet can reduce your risk of serious head injury. But such falls off bikes are rare — exceedingly so in mature urban cycling systems.
On the other hand, many researchers say, if you force or pressure people to wear helmets, you discourage them from riding bicycles. That means more obesity, heart disease and diabetes. And — Catch-22 — a result is fewer ordinary cyclists on the road, which makes it harder to develop a safe bicycling network. The safest biking cities are places like Amsterdam and Copenhagen, where middle-aged commuters are mainstay riders and the fraction of adults in helmets is minuscule.
“Pushing helmets really kills cycling and bike-sharing in particular because it promotes a sense of danger that just isn’t justified.”
Given that we don’t have statistics on infosec analogs to head injuries or obesity, I’m curious: where can we make the best infosec analogy to bicycling and helmets? Where are our outcomes potentially worse because we focus on every little risk?
My favorite example is password change policies, where we absorb substantial amounts of everyone’s time without evidence that they’ll improve our outcomes.
by adam on March 25, 2013
So I was listening to the Shmoocon presentation on information sharing, and there was a great deal of discussion of how sharing too much information could reveal to an attacker that they’d been detected. I’ve discussed this problem a bit in “The High Price of the Silence of Cyberwar,” but wanted to talk more about it. What struck me is that the audience seemed to be thinking that an MD5 of a bit of malware was equivalent to revealing the Ultra intelligence taken from Enigma decrypts.
Now perhaps that’s because I’m re-reading Neal Stephenson’s Cryptonomicon, where one of the subplots follows the exploits of Unit 2702, dedicated to ensuring that use of Ultra is explainable in other ways.
But really, it was pretty shocking to hear people nominally dedicated to the protection of systems actively working to deny themselves information that might help them detect an intrusion faster and more effectively.
For an example of how that might work, read “Protecting People on Facebook.” First, let me give kudos to Facebook for revealing an attack they didn’t have to reveal. Second, Facebook says “we flagged a suspicious domain in our corporate DNS logs.” What is a suspicious domain? It may or may not be one not seen before. More likely, it’s one that some other organization has flagged as malicious. When organizations reveal the IP addresses or domain names of command and control servers, it gives everyone a chance to learn if they’re compromised. It can have other positive effects. Third, it reveals a detection method that actually caught a bad guy, and that you might or might not be using. Now you can consider whether you want to invest in DNS logging.
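As a toy illustration of why published indicators matter (the file names and log format below are assumptions of mine, not anything Facebook has described), checking your own DNS logs against a shared list of command and control domains is only a few lines:

```python
# Toy example: compare domains seen in corporate DNS logs against a
# shared list of known command-and-control domains. The file names and
# the "<timestamp> <client> <queried-domain>" log format are assumptions.
def load_bad_domains(path):
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def suspicious_lookups(dns_log_path, bad_domains):
    """Yield (timestamp, client, domain) for lookups of shared C2 domains."""
    with open(dns_log_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 3 and parts[2].lower() in bad_domains:
                yield parts[0], parts[1], parts[2]

bad = load_bad_domains("shared_c2_domains.txt")
for ts, client, domain in suspicious_lookups("corp_dns.log", bad):
    print(f"{ts} {client} looked up {domain}")
```

None of this works, of course, if nobody publishes the domains in the first place.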
Now, there’s a time to be quiet during incident response. But there’s a very real tradeoff to be made between concealing your knowledge of a breach and aiding and abetting other breaches.
Maybe it’s time for us to get angry when a breach disclosure doesn’t include at least one IP and one MD5? Because when the disclosure doesn’t include those facts, our ability to defend ourselves is dramatically reduced.
by adam on March 22, 2013
This week I have experienced an echo of this pattern at the 2013 WEF meeting. But this time my unease does not revolve around any financial threats, but another issue – cyber security.
[The] crucial point is this: even if some companies are on top of the issue, others are not, and without more public debate, it will be tough to get boards to act. Without more disclosure it will also be difficult for investors to start pricing in these risks. So it is high time shareholders began demanding more information from companies about the issue – not just about the scale of the cyber attacks, but also the moves being taken to fend them off.
And if companies refuse to answer, then shareholders – or the government – should ask them why. After all, if there is one thing we learnt from 2007, it is that maintaining an embarrassed silence about risks does not usually make them go away; least of all when there is potential damage to consumers (and investors) as well as the companies under attack.
So writes Gillian Tett in the Financial Times, “Time to break wall of silence on escalating cyber attacks”
Thanks to Russell Thomas for the pointer.
by Russell on March 18, 2013
One big problem with existing methods for estimating breach impact is the lack of credibility and reliability of the evidence behind the numbers. This is especially true if the breach is recent or if most of the information is not available publicly. What if we had solid evidence to use in breach impact estimation? This leads to the idea of “Indicators of Impact” to provide ‘ground truth’ for the estimation process.
The approach is premised on the view that breach impact is best measured by the costs or resources associated with response, recovery, and restoration actions taken by the affected stakeholders. These activities can include both routine incident response and rarer activities. (See our paper for more.) This leads to ‘Indicators of Impact’, which are evidence of the existence or non-existence of these activities. Here’s a definition (p. 23 of our paper):
An ‘Indicator of Impact’ is an observable event, behavior, action, state change, or communication that signifies that the breached or affected organizations are attempting to respond, recover, restore, rebuild, or reposition because they believe they have been harmed. For our purposes, Indicators of Impact are evidence that can be used to estimate branching activity models of breach impact, either the structure of the model or key parameters associated with specific activity types. In principle, every Indicator of Impact is observable by someone, though maybe not outside the breached organization.
Of course, there is a close parallel to the now-widely-accepted idea of “Indicators of Compromise“, which are basically technical traces associated with a breach event. There’s a community supporting an open exchange format — OpenIoC. The big difference is that Indicators of Compromise are technical and are used almost exclusively in tactical information security. In contrast, Indicators of Impact are business-oriented, even if they involve InfoSec activities, and are used primarily for management decisions.
From Appendix B, here are a few examples:
- Was there a forensic investigation, above and beyond what your organization would normally do?
- Was this incident escalated to the executive level (VP or above), requiring them to make resource decisions or to spend money?
- Was any significant business process or function disrupted for a significant amount of time?
- Due to the breach, did the breached organization fail to meet any contractual obligations with its customers, suppliers, or partners? If so, were contractual penalties imposed?
- Were top executives or the Board significantly diverted by the breach and aftermath, such that other important matters did not receive sufficient attention?
The list goes on for three pages in Appendix B but we fully expect it to grow much longer as we get experience and other people start participating. For example, there will be indicators that only apply to certain industries or organization types. In my opinion, there is no reason to have a canonical list or a highly structured taxonomy.
As signals, the Indicators of Impact are not perfect, nor do they individually provide sufficient evidence. However, they have the very great benefit of being empirical, subject to documentation and validation, and potentially observable in many instances, even outside of InfoSec breach events. In other words, they provide a ‘ground truth’ which has been sorely lacking in breach impact estimation. When assembled as a mass of evidence and using appropriate inference and reasoning methods (e.g. see this great book), Indicators of Impact could provide the foundation for robust breach impact estimation.
There are also applications beyond breach impact estimation. For example, they could be used in resilience planning and preparation. They could also be used as part of information sharing in critical infrastructure to provide context for other information regarding threats, attacks, etc. (See this video of a Shmoocon session for a great panel discussion regarding challenges and opportunities regarding information sharing.)
Fairly soon, it would be good to define a lightweight standard format for Indicators of Impact, possibly as an extension to VERIS. I also think that Indicators of Impact could be a good addition to the upcoming NIST Cybersecurity Framework. There’s a public meeting April 3rd, and I might fly out for it. Either way, I will submit to the NIST RFI.
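To make the lightweight-format idea concrete, here’s a strawman record sketched in Python/JSON. Every field name and value is invented for illustration; none of this is part of VERIS, OpenIoC, or our paper.

```python
# Strawman only: one way a single Indicator of Impact observation might
# be represented for exchange. All field names and values are invented.
import json

indicator = {
    "indicator_type": "external_forensic_investigation",  # from a shared (hypothetical) vocabulary
    "observed_date": "2013-03-18",
    "source": "breached_org_press_release",                # where the indicator was observed
    "activity_category": "response",                       # response / recovery / restoration / reposition
    "escalated_to_executives": True,
    "notes": "Outside forensics firm retained beyond routine incident response",
}

print(json.dumps(indicator, indent=2))
```

Something this small could ride alongside VERIS-style incident records without requiring a canonical list or heavy taxonomy, which fits the point above about keeping the indicator list open-ended.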
Your thoughts and comments?