by adam on January 5, 2015
Lately I’ve noted a lot of people quoted in the media after breaches saying “X was Security 101. I can’t believe they didn’t do X!” For example, “I can’t believe that LinkedIn wasn’t salting passwords! That’s security 101!”
Now, I’m unsure if that’s “security 101” or not. I think security 101 for passwords is “don’t store them in plaintext”, or “don’t store them with a crypto algorithm you designed”. Ten years ago, it would have included salting, but with the speed of GPU crackers, maybe it doesn’t anymore. A good library would probably still include it. Maybe LinkedIn was spending more on preventing XSS or SQL injection, and that pushed password storage off their list. Maybe that’s right, maybe it’s wrong. To tell you the truth, I don’t want to argue about it.
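For concreteness, here's a minimal sketch of what "salted, slow password hashing" looks like using only Python's standard library. PBKDF2 and the iteration count are illustrative choices of mine, not anything LinkedIn used; a dedicated library (bcrypt, scrypt, or Argon2) is generally a better choice in practice.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Hash a password with a per-user random salt and a deliberately slow KDF."""
    if salt is None:
        salt = os.urandom(16)  # a unique salt per password defeats precomputed tables
    # PBKDF2-HMAC-SHA256; the high iteration count is what slows GPU cracking,
    # which is why salting alone is no longer enough.
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 600_000)
    return salt, digest

def verify_password(password, salt, expected):
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("wrong guess", salt, stored)
```

Note that the salt prevents precomputation and cross-account matching, while the slow KDF is what actually raises the cracker's cost per guess.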
What I want to argue about is the backwards looking nature of these statements. I want to argue because I did some searching, and not one of those folks I searched for has committed to a list of security 101, or what are the “simple controls” every business should have.
This is important because otherwise, hindsight is 20/20. It’s easy to say in hindsight that an organization should have done A or B or C. It’s harder to offer up a complete list in advance, and harder yet to justify the budget required to deploy and operate it.
So I’m going to make three requests for 2015:
- If you’re an expert (or even play one on the internet), and if you want to say “X is Security 101,” please publish your full list of what’s “101.”
- If you’re a reporter and someone tells you “X is security 101” please ask them for their list.
- Finally, if you’re someone who wants to see security improve, and you hear claims about “101”, please ask for the list.
Oh, and since it’s sauce for the gander, here’s my list for individuals:
- Stay up to date–get most of your machines on the latest revisions of software and get patches for security issues installed, especially in your browser and AV software.
- Use a firewall that blocks most inbound traffic.
- Ensure you have a working backup of your data.
(There are complex arguments about AV software, and a lack of agreements about how to effectively test it. Do you need it? Will it block the wildlist? There’s nuance, but that nuance doesn’t play into a 101 list. I won’t be telling people not to use AV software.)
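Since a "working backup" means one you've actually verified, here is an illustrative sketch of checking a backup directory against the original by checksum. The function names are mine, not a standard tool, and real backup verification should also include a restore test.

```python
import hashlib
from pathlib import Path

def _checksum(path):
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_backup(original_dir, backup_dir):
    """Compare every file under original_dir against backup_dir.

    Returns a list of (relative_path, reason) for files that are
    missing from, or differ in, the backup.
    """
    original_dir, backup_dir = Path(original_dir), Path(backup_dir)
    problems = []
    for src in sorted(original_dir.rglob("*")):
        if not src.is_file():
            continue
        rel = src.relative_to(original_dir)
        dst = backup_dir / rel
        if not dst.is_file():
            problems.append((str(rel), "missing"))
        elif _checksum(src) != _checksum(dst):
            problems.append((str(rel), "differs"))
    return problems
```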
*By “lately,” I meant in 2012, when I wrote this, right after the Linkedin breach. But I recently discovered that I hadn’t posted.
[Update: I’m happy to see Ira Winkler and Araceli Treu Gomes took up the idea in “The Irari rules for declaring a cyberattack ‘sophisticated’.” Good for them!]
by adam on December 15, 2014
When people don’t take their drugs as prescribed, it’s for very human reasons.
Typically they can’t tolerate the side effects, the cost is too high, they don’t perceive any benefit, or they’re just too much hassle.
Put these very human (and very subjective) reasons together, and they create a problem that medicine refers to as non-adherence. It’s an awkward term that describes a daunting problem: about 50% of people don’t take their drugs as prescribed, and this creates some huge downstream costs. Depending how you count it, non-adherence racks up between $100 billion and $280 billion in extra costs – largely due to a condition worsening and leading to more expensive treatments down the line.
So writes “Getting People To Take Their Medicine.” But Thomas Goetz is not simply griping about the problem, he’s presenting a study of ways to address it.
That’s important because we in information security also ask people to do things, from updating their software to trusting certain pixels and not other visually identical pixels, and they don’t do those things, also for very human reasons.
His conclusion applies almost verbatim to information security:
So we took especial interest in the researcher’s final conclusion: “It is essential that researchers stop re-inventing the poorly performing ‘wheels’ of adherence interventions.” We couldn’t agree more. It’s time to stop approaching adherence as a clinical problem, and start engaging with it as a human problem, one that happens to real people in their real lives. It’s time to find new ways to connect with people’s experiences and frustrations, and to give them new tools that might help them take what the doctor ordered.
If only information security’s prescriptions were backed by experiments as rigorous as clinical trials.
(I’ve previously shared Thomas Goetz’s work in “Fear, Information Security, and a TED Talk“)
by adam on December 1, 2014
I’ve been threat modeling for a long time, and at Microsoft, had the lovely opportunity to put some rigor into not only threat modeling, but into threat modeling in a consistent, predictable, repeatable way. Because I did that work at Microsoft, sometimes people question how it would work for a startup, and I want to address that from having done it. I’ve done threat modeling for startups like Zero Knowledge (some of the output is in the Freedom security whitepaper), and while a consultant or advisor, for other startups who decided against the open kimono approach.
Those threat modeling sessions have taken anywhere from a few hours to a few days, and the same techniques that we developed to make threat modeling more effective can be used to make it more efficient. One key aspect is to realize that what security folks call threat modeling is comprised of several related tasks, and each of them can be done in different ways. Even more important, only one of the steps is purely focused on threats.
I’ve seen a four-question framework work at all sorts of organizations. The four questions are: “what are we building/deploying,” “what could go wrong,” “what are we going to do about it,” and “did we do an acceptable job at this?”
The “what are we building” question is about creating a shared model of the system. This can be done on a napkin or a whiteboard, or for a large complex system which has grown organically, can involve people working for months. In each case, there’s value beyond security. Your code is cleaner, more consistent, more maintainable, and sometimes even faster when everyone agrees on what’s running where. Discussing trust boundaries can be quick and easy, and understanding them can lead to better marshaling and fewer remote calls. Of course, if your code is spaghetti and no one’s ever bothered to whiteboard it, this step can involve a lot of discussion and debate, because it will lead to refactoring. Maybe you can launch without that refactoring, maybe it’s important to do it now. It can be hard to tell if no one knows how your system is put together.
Understanding “what could go wrong” can happen in as little as an hour for people new to security using the Elevation of Privilege game. You should go breadth-first here, getting an understanding of where the threats might cluster, and if you need to dig deeper.
What are you going to do about it? You’re going to track issues as bugs. (Startups should not spend energy thinking about bugs and flaws as two separate things with their own databases.) If your bugs are post-its, that’s great. If you’re using a database, fine. Just track the issues. Some of them will get turned into tests. If you’re using Test-Driven Development, threat modeling is a great way to help you think about security testing and where it needs to focus. (If you’re not using TDD, it’s still a great way to focus your security testing.)
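As a sketch of how a tracked threat becomes a test, consider a hypothetical spoofing threat from the “what could go wrong” step. The handler, token check, and issue number below are all invented stand-ins for whatever your real code and tracker contain.

```python
# Issue #12 (spoofing, from threat modeling): anonymous callers must not
# reach the endpoint. The handler is a hypothetical stand-in for real code.

def handle_request(token, payload):
    """Reject requests without a valid token (mitigation for issue #12)."""
    if token != "expected-secret":
        return {"status": 401}
    return {"status": 200, "echo": payload}

def test_rejects_missing_token():
    assert handle_request(None, "data")["status"] == 401

def test_rejects_wrong_token():
    assert handle_request("guess", "data")["status"] == 401

test_rejects_missing_token()
test_rejects_wrong_token()
```

The point is the traceability: the test names the threat, so when it fails you know which issue regressed.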
Total time spent for a startup can be as low as a few hours, especially if you’ve been thinking about software architecture and boundaries along the way. I’ve never spent more than a few days on threat modeling with a small company, and never heard it called a poor use of time. Of course, it’s easy to say that each thing that someone wants a startup to do “only takes a little time,” but all those things can add up to a substantial distraction from the core product. However, a media blow-up or a Federal investigation can also distract you, and distract you for a lot longer. A few hours of threat modeling can help you avoid the sorts of problems that distracted Twitter or Snapchat. The landscape continues to change rapidly post-Snowden, and threat modeling is the fastest way to consider what security investments a startup should be making.
by adam on November 17, 2014
Simson Garfinkel and Heather Lipford’s Usable Security: History, Themes, and Challenges should be on the shelf of anyone who is developing software that asks people to make decisions about computer security.
We have to ask people to make decisions because they have information that the computer doesn’t. My favorite example is the Windows “new network” dialog, which asks what sort of network you’re connecting to: work, home, or coffee shop. The information is used to configure the firewall. My least favorite example is phishing, where people are asked to make decisions about technical minutiae before authenticating. Regardless, we are not going to entirely remove the need for people to make decisions about computer security. So we can either learn to gain their participation in more effective ways, or we can accept a very high failure rate. The former option is better, and this book is a substantial contribution.
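To make the “new network” example concrete, here is a hypothetical sketch of how an answer to that dialog might select a firewall posture. The profile names and fields are invented for illustration, not Windows’ actual configuration; the design point is the fail-safe default when the person’s answer is unknown.

```python
# Hypothetical mapping from the person's answer to a firewall posture.
FIREWALL_PROFILES = {
    "home":   {"inbound": "allow-local-subnet", "discovery": True},
    "work":   {"inbound": "allow-domain",       "discovery": True},
    "public": {"inbound": "block-all",          "discovery": False},
}

def configure_firewall(network_kind):
    # Default to the most restrictive profile when the answer is unrecognized:
    # the cost of being too strict at home is lower than being open in a cafe.
    return FIREWALL_PROFILES.get(network_kind, FIREWALL_PROFILES["public"])

assert configure_firewall("coffee shop")["inbound"] == "block-all"
```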
It’s common for designers to throw up their hands at these challenges, saying things like “given a choice between security and dancing babies, people will choose dancing babies every time,” or “you can’t patch human stupidity.” However, a recently published study by Google and UCSD found that the most effective phishing sites fooled 45% of the people who visited them, while on average such sites fooled only 13%. (There’s a good summary of that study available.) Claiming that “people will choose dancing babies 13% of the time” just doesn’t seem like a compelling argument against trying.
This slim book is a review of the academic work that’s been published, almost entirely in the last 20 years, on how people interact with information security systems. It summarizes and contextualizes the many things we’ve learned, mistakes that have been made, and does so in a readable and concise way. The book has six chapters:
- A brief history
- Major Themes in UPS Academic Research
- Lessons Learned
- Research Challenges
- Conclusion/The Next Ten Years
The “Major themes” chapter is 61 or so pages, which is over half of the 108 pages of content. (The book also has 40 pages of bibliography). Major themes include authentication, email security and PKI, anti-phishing, storage, device pairing, web privacy, policy specification, mobile, social media and security administration.
The “Lessons Learned” chapter is quite solid, covering “reduce decisions,” “safe and secure defaults,” “provide users with better information, not more information,” “users require clear context to make good decisions,” “information presentation is critical” and “education works but has limits.” I have a quibble, which is that Sasse’s concept of mental ‘compliance budgets’ is also important, and I wish it were given greater prominence. (My other quibble is more of a pet peeve: the term “user” where “people” would serve. Isn’t it nicer to say “people require clear context to make good decisions”?) Neither quibble should take away from my key message, which is that this is an important new book.
The slim nature of the book is, I believe, an excellent usability property. The authors present what’s been done, lessons that they feel can be taken away, and move to the next topic. This lets you the reader design, build or deploy systems which help the person behind the keyboard make the decisions they want to make. To re-iterate, anyone building software that asks people to make decisions should read the lessons contained within.
Disclaimer: I was paid to review a draft of this book, and my name is mentioned kindly in the acknowledgements. I am not being paid to write or post reviews.
[Updated to correct the sentence about the last 20 years.]
by adam on November 11, 2014
There are a number of reports out recently, breathlessly presenting their analysis of one threatening group of baddies or another. You should look at the reports for facts you can use to assess your systems, such as filenames, hashes and IP addresses. Most readers should, at most, skim their analysis of the perpetrators. Read on for why.
There are a number of surface reasons that you might reject or ignore these reports. For example, these reports are funded by marketing. Even if they are, that’s not a reason to reject them. The baker does not bake bread for fun, and the business goal of marketing can give us useful information. You might reject them for their abuse of adjectives like “persistent”, “stealthy”, or “sophisticated.” (I’m tempted to just compile a wordcloud and drop it in place of writing.) No, the reason to only skim these is what the analysis does to the chance of your success. There are two self-inflicted wounds that often happen when people focus on attackers:
- You miss attackers
- You misunderstand what the attackers will do
You may get a vicarious thrill from knowing who might be attacking you, but that thrill makes those details more available to your conscious mind, anchoring your attention on them and causing you to miss other attackers. Similarly, you might get attached to the details of how they attacked last year, and not notice how those details change.
Now, you might think that your analysis won’t fall into those traps, but let me be clear: the largest, best-funded analysis shops in the world routinely make serious and consequential mistakes about their key areas of responsibility. The CIA didn’t predict the collapse of the Soviet Union, and it didn’t predict the rise of ISIS.
If your organization believes that it’s better at intelligence analysis than the thousands of people who work in US intelligence, then please pay attention to my raised eyebrow. Maybe you should be applying that analytic awesomesauce to your core business, maybe it is your core business, or maybe you should be carefully combing through the reports and analysis to update your assessments of where these rapscallions shall strike next. Or maybe you’re over-estimating your analytic capabilities.
Let me lay it out for you: the “sophisticated” attackers are using phishing to get a foothold, then dropping malware which talks to C&C servers in various ways. The phishing has three important variants you need to protect against: links to exploit web pages, documents containing exploits, and executables disguised as documents. If you can’t reliably prevent those things, detect them when you’ve missed, and respond when you discover you’ve missed, then digging into the motivations of your attackers may not be the best use of your time.
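As an illustration of the third variant, here is a minimal sketch that flags attachments with a document-looking name and an executable extension (the classic “invoice.pdf.exe”). The extension lists are illustrative and far from complete, and real mail filtering should inspect content, not just names.

```python
import os

# Extensions that execute code but are often disguised as documents,
# and document extensions attackers embed in the filename. Illustrative only.
EXECUTABLE_EXTENSIONS = {".exe", ".scr", ".js", ".vbs", ".bat", ".cmd", ".jar"}
DOCUMENT_EXTENSIONS = {".pdf", ".doc", ".docx", ".xls", ".xlsx"}

def looks_disguised(filename):
    """Flag names like 'invoice.pdf.exe': document inner name, executable suffix."""
    root, ext = os.path.splitext(filename.lower())
    if ext not in EXECUTABLE_EXTENSIONS:
        return False
    inner_ext = os.path.splitext(root)[1]
    return inner_ext in DOCUMENT_EXTENSIONS

assert looks_disguised("invoice.pdf.exe")
assert not looks_disguised("report.docx")
```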
The indicators that can help you find the successful attacks are an important value from these reports, and that’s what you should use them for. Don’t get distracted by the motivations.
by adam on October 7, 2014
There’s a recurring theme in data breach stories:
The risks were clear to computer experts inside $organization: The organization, they warned for years, might be easy prey for hackers.
But despite alarms as far back as 2008, $organization was slow to raise its defenses, according to former employees.
The particular quote is from “Ex-Employees Say Home Depot Left Data Vulnerable,” but you can find similar statements about healthcare.gov, Target, and most other breaches. It’s worth taking apart these claims a little bit, and asking what we can do about them.
This is a longish blog post, so the summary is: these claims are true, irrelevant, and a distraction from engineering more secure organizations.
I told you so?
First, these claims are true. Doubtless, in every organization of any size, there were people advocating all sorts of improvements, not all of which were funded. Employees who weren’t successful at driving effective change complain that “when they sought new software and training, managers came back with the same response: ‘We sell hammers.’” The “I told you so” isn’t limited to employees; there’s a long list of experts who are willing to wax philosophic about the mote in their neighbors’ eyes. This often comes in the form of “of course you should have done X.” For example, the Home Depot article includes a quote from Gartner: “Scanning is the easiest part of compliance…There are a lot of services that do this. They hardly cost any money.” I’ll get to that claim later in this article. First, let’s consider the budget items actually enumerated in the article.
In the New York Times article on Home Depot, I see at least four programs listed as if they’re trivial, cheap, and would have prevented the breach:
- Anti-virus
- Threat intelligence
- Continuous (network) anomaly detection
- Vulnerability scanning
Let’s discuss each in turn.
(1) The claims that even modern, updated anti-virus is trivially bypassed by malware employed by criminals are so common I’m not going to look for a link.
(2) Threat intelligence (and “sharing”) usually means a feed of “observables” or “indicators of compromise.” These usually include hashes of files dropped by intruders, IP addresses and domain names for the “command and control” servers or fake emails which are sent, either containing an exploit, a trojan horse, or a phishing link. This can be useful if your attackers don’t bother to change such things between attacks. The current state of these feeds and their use is such that many attackers don’t really bother to make such changes.
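Here is a hypothetical sketch of how such indicators get used, matching file hashes and C&C domains against a feed. The values are placeholders, not real malware hashes or domains, and the code also shows why a one-byte change to the malware defeats the hash check.

```python
import hashlib

# Hypothetical indicators of the kind such reports publish; the values
# here are placeholders, not real malware hashes or attacker domains.
IOC_SHA256 = {hashlib.sha256(b"known-dropper").hexdigest()}
IOC_DOMAINS = {"c2.example.net"}

def match_file(data):
    # Exact hash matching: a single changed byte in the malware defeats this,
    # which is why polymorphism undermines hash-based indicators.
    return hashlib.sha256(data).hexdigest() in IOC_SHA256

def match_domain(hostname):
    h = hostname.lower().rstrip(".")
    # Match the indicator domain itself or any subdomain of it.
    return any(h == d or h.endswith("." + d) for d in IOC_DOMAINS)

assert match_file(b"known-dropper")
assert not match_file(b"known-dropper!")  # one byte changed: no match
assert match_domain("beacon.c2.example.net")
```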
(See also my previous comments on “Don’t share, publish:” we spend so much time on rules for sharing that we don’t share.) However, before saying “everyone should sign up for such services,” “they’ll be a silver bullet,” we should consider what the attackers will do, which is to buy the polymorphism services that more common malware has been using for years. So it is unlikely that threat intelligence would prevent this breach.
(3) Continuous anomaly detection generally only works if you have excellent change management processes, careful network segmentation, and a mostly static business environment. In almost any real network, the level of alarms from such systems is high, and the value of the alarms incredibly low. (This is a result of the organizations making the systems not wanting to be accused of negligence because their system didn’t “sound the alarm,” and so they alarm on everything.) Most organizations who field such things end up ignoring the alarms, dropping the maintenance contracts, and leaving the systems in place to satisfy the PCI auditors.
(4) Vulnerability scanning may be cheap, but like the anomaly detectors, the scanners are motivated to “sound the alarm” on everything. Most alarms do not come with push-button remediation. Even if that feature is offered, there’s a need to test the remediation to see if it breaks anything, to queue it in the aforementioned change management, and to work across some team boundary so that the operations team takes action. None of which falls under the rhetoric of “hardly cost any money.”
The Key Question: How to do better?
Any organization exists to deliver something, and usually that something is not cyber security. In Home Depot’s case, they exist to deliver building supplies at low cost. So how should Home Depot’s management make decisions about where to invest?
Security decisions are like a lot of other decisions in business. There’s insufficient information, people who may be acting deceitful, and the stakes are high. But unlike a lot of other decisions, figuring out if you made the right one is hard. Managers make a lot of decisions, and the relationship between those decisions and the security outcomes is hard to understand.
The trouble is, in security, we like to hide our decisions, and hide the outcomes of our decisions. As a result, we never learn. And employees keep saying “I told you so” about controls that may or may not help. As I said at BSides Las Vegas, companies are already paying the PR price, they need to move to talking about what happened.
With that information, we can do better at evaluating controls, and moving from opinions about what works (with the attendant “I told you so”) to evidence about effective investments in security.
by adam on September 29, 2014
My thanks to the judges, most especially to Gastón Hillar for the constructive criticism that “Unluckily, the author has chosen to focus on modeling and didn’t include code samples in the book. Code samples would have been very useful to make the subject clearer for developers who must imagine in their own lines of code how some of the attacks are performed.” He also says “Overall, this is an excellent volume that should be examined by most developers concerned with issues of security.” The full review is at “Jolt Finalist: Threat Modeling.”
by adam on August 27, 2014
All through the week of BSides/BlackHat/Defcon, people came up to me to tell me that they enjoyed my BSides Las Vegas talk. (Slides, video). It got some press coverage, including an article by Jon Evans of TechCrunch, “Notes From Crazytown, Day One: The Business Of Fear.” Mr. Evans raises an interesting point: “the computer security industry is not in the business of openness, it is in the business of fear.” I’ve learned to be happy when people surface reasons that what I’m saying won’t work, because it gives us an opportunity to consider and overcome those objections.
At one level, Mr Evans is correct. Much of the computer security industry is in the business of fear. There are lots of incentives for the industry, many of which take us in wrong directions. (Mr. Evans acknowledges the role of the press in this; I appreciate his forthrightness, and don’t know what to do about that aspect of the problem, beyond pointing out that “company breached” is on its unfortunate way to being the new “dog bites man.”)
But I wasn’t actually asking the industry to change. I was asking professionals to change. And while that may appear to be splitting hairs, there’s an important reason that I ask people to consider issues of efficacy and burnout. That reason is I think we can incentivize people to act in their own long term interest. It’s challenging, and that’s why, after talking to behavior change specialists, I chose to use a set of commitment devices to get people to commit to pushing organizations to disclose more.
I want to be super-clear on my request, because based on feedback, not everyone understood my request. I do not expect everyone who tries to succeed. [#include Yoda joke.] All I’m asking people to do is to push their organizations to do better. To share root cause analyses. Because if we do that, we will make things better.
It’s not about the industry, it’s about the participants. It’s about you and me. And we can do better, and in doing so, we can make things better.
by adam on July 10, 2014
Gabrielle Gianelli has pulled back the curtain on how Etsy threat modeled a new marketing campaign. (“Threat Modeling for Marketing Campaigns.”)
I’m really happy to see this post, and the approach that they’ve taken:
First, we wanted to make our program sustainable through proactive defenses. When we designed the program we tried to bake in rules to make the program less attractive to attackers. However, we didn’t want these rules to introduce roadblocks in the product that made the program less valuable from users’ perspectives, or financially unsustainable from a business perspective.
Gabrielle apologizes several times for not giving more specifics, e.g.:
I have to admit upfront, I’m being a little ambiguous about what we’ve actually implemented, but I believe it doesn’t really matter since each situation will differ in the particulars.
I think this is almost exactly right. I could probably tell you about the specifics of the inputs into the machine learning algorithms they’re probably using. Not because I’m under NDA to Etsy (I’m not), but because such specifics have a great deal of commonality. More importantly, and here’s where I differ, I believe you don’t want to know those specifics. Those specifics would be very likely to distract you from going from a model (Etsy’s is a good one) to the specifics of your situation. So I would encourage Etsy to keep blogging like this, and to realize they’re at a great level of abstraction.
So go read “Threat Modeling for Marketing Campaigns.”
by adam on June 11, 2014
Stefan Larsson talks about “What doctors can learn from each other:”
Different hospitals produce different results on different procedures. Only, patients don’t know that data, making choosing a surgeon a high-stakes guessing game. Stefan Larsson looks at what happens when doctors measure and share their outcomes on hip replacement surgery, for example, to see which techniques are proving the most effective. Could health care get better — and cheaper — if doctors learn from each other in a continuous feedback loop? (Filmed at TED@BCG.)
Measuring and sharing outcomes of procedures? I’m sure our anti-virus software makes that unnecessary.
But you should watch the talk anyway — maybe someday you’ll need a new hip, and you’ll want to be able to confidently question the doctors draining you of evil humors.