PCI & the 166816 password

by adam on June 22, 2015

This was a story back around RSA, but I missed it until RSnake brought it up on Twitter: “[A default password] can hack nearly every credit card machine in the country.” The simple version is that Charles Henderson of Trustwave found that “90% of the terminals of this brand we test for the first time still have this code.” (Slide 30 of RSA deck.) Wow.

Now, I’m not a fan of the “ha-ha in hindsight” or “that’s security 101!” responses to issues. In fact, I railed against it in a blog post in January, “Security 101: Show Your List!”

But here’s the thing. Credit card processors have a list. It’s the Payment Card Industry Data Security Standard. That standard is imposed, contractually, on everyone who processes payment cards through the big card networks. In version 3, requirement 2 is “Do not use vendor-supplied defaults for system passwords.” This is not an obscure sub-bullet. As far as I can tell, it is not a nuanced interpretation, or even an interpretation at all. In fact, testing procedure 2.1.a of v3 of the standard says:

2.1.a Choose a sample of system components, and attempt to log on (with system administrator help) to the devices and applications using default vendor-supplied accounts and passwords, to verify that ALL default passwords (including those on … POS terminals…) have been changed. [I’ve elided a few elements of the list for clarity.]
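
To make that testing procedure concrete, here is a minimal sketch of what an automated default-credential check might look like. Everything in it is a placeholder: the vendor names, the defaults table, and the try_login routine stand in for whatever management interface your terminals actually expose.

```python
# A minimal sketch of testing procedure 2.1.a: try known vendor-default
# passwords against a sample of devices and flag any that still accept them.
# The vendor names, defaults table, and try_login() are placeholders; a real
# check would go through the terminal's actual management interface.

KNOWN_DEFAULTS = {
    "VendorA": ["166816"],         # the default from the story above
    "VendorB": ["admin", "1234"],  # illustrative only
}

def try_login(device, password):
    """Placeholder: return True if the device accepts this password."""
    return False  # replace with the vendor's management interface

def audit_defaults(devices):
    """Return (device_id, password) pairs for devices still on defaults."""
    findings = []
    for device in devices:
        for password in KNOWN_DEFAULTS.get(device["vendor"], []):
            if try_login(device, password):
                findings.append((device["id"], password))
    return findings

if __name__ == "__main__":
    sample = [{"id": "register-01", "vendor": "VendorA"}]
    print(audit_defaults(sample))
```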

Now, the small merchant may not be aware that their terminal has a passcode. They may have paid someone else to set it up. But shouldn’t that person have set it up properly? The issue is not just that the passcodes are not reset; the issue I’m worried about is that the system appears broken. We appear to have evidence that getting security right requires activity by busy, possibly undertrained people. Why is that still the case ten years into PCI?

This isn’t a matter of “checklists replacing security.” I have in the past railed against checklists (including in the book this blog is named after). But after I read “The Checklist Manifesto”, I’ve moderated my views a bit, such as in “Checklists and Information Security.” Here we have an example of exactly what checklists are good for: avoiding common, easy-to-make and easy-to-check mistakes.

When I raised some of these questions on Twitter, someone said that the usual interpretation is that the site selects the sample (where they lack confidence). To an extent, that’s understandable, and I’m working very hard to avoid hindsight bias here. But I think it’s not hindsight bias to say that a sample should be a random sample unless there’s a very strong reason to choose otherwise. I think it’s not hindsight bias to note that your financial auditors don’t let you select which transactions are audited.

Someone else pointed out that it’s “first time audits,” which is ok, but why, a decade after PCI 1.0, is this still an issue? Shouldn’t the vendor have addressed this by now? Admittedly, it may be hard to manage device PINs at scale — if you’re Target, with tens of thousands of PIN pads, and your techs need access to the PINs, what do you do to ensure that the tech has access while at the register, but not after they’ve quit? But even given such challenges, shouldn’t the overall payment card security system be forcing a fix to such issues?
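
For what it’s worth, the scale problem has known shapes. Here’s one hedged sketch (not anything Target or any vendor necessarily does): derive each device’s passcode from a central secret and the device ID, and have techs request codes through a service that checks they’re still employed and logs the request.

```python
# Sketch: derive a per-device passcode from a central master secret and the
# device ID. Techs never learn the master secret; they request a code for one
# device at a time from a service that checks HR status and logs the request,
# so access can be revoked centrally when someone leaves.
import hmac, hashlib

MASTER_SECRET = b"keep-this-in-an-hsm-not-in-source"  # illustrative only

def device_passcode(device_id, digits=6):
    mac = hmac.new(MASTER_SECRET, device_id.encode(), hashlib.sha256).digest()
    code = int.from_bytes(mac[:8], "big") % (10 ** digits)
    return f"{code:0{digits}d}"

def issue_passcode(tech_id, device_id, is_active_employee):
    if not is_active_employee(tech_id):
        raise PermissionError("tech is no longer authorized")
    # A real system would also log (tech_id, device_id, timestamp) here.
    return device_passcode(device_id)

print(issue_passcode("tech-42", "register-0117", lambda t: True))
```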

All bellyaching and sarcastic commentary aside, if the PCI process isn’t catching this, what does that tell us? Serious question. I have ideas, but I’m really curious as to what readers think. Please keep it professional.

Threat Modeling Crypto Back Doors

by adam on May 19, 2015

Today, the Open Technology Institute released an open letter to the President of the United States from a broad set of organizations and experts. I’m pleased to be a signer, and I agree wholeheartedly with the text of the letter. (Some press coverage.)

I did want to pile on with an excerpt from chapter 9 of Threat Modeling: Designing for Security:

For another example of comparative threat modeling, consider the two systems shown in Figures 9-2 and 9-3. Figure 9-2 depicts an e-mail system, and Figure 9-3 is a version of 9-2 with a “lawful intercept” module added. (“Lawful intercept” is an Orwellian phrase for “thing which allows people to bypass the security features of your system.” Setting aside any arguments of “should we as a society have such a mechanism?” it’s possible to assess the technical security implications of adding such mechanisms.)

It should be obvious that Figure 9-2 is more secure than Figure 9-3. Using software-centric modeling, Figure 9-3 adds two data flows and a process; thus, by STRIDE-per-element, it has an additional 12 threats (tampering, information disclosure, and DoS with each flow, for 6; and the six S, T, R, I, D, and E threats against the process, for a total of 12). Additionally, Figure 9-3 has two apparent groupings of elevation-of-privilege threats: those posed by outsiders and those posed by software-allowed, but human-policy-violating, use. Thus, if Figure 9-2 has a list of threats (1…n), then Figure 9-3 has a list of threats (1…n+14).
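
If you want to check the book’s arithmetic, here is the STRIDE-per-element counting as a few lines of Python; the per-element threat lists follow the convention in the excerpt, where data flows get tampering, information disclosure, and denial of service, and processes get all six.

```python
# A sketch of the STRIDE-per-element arithmetic from the excerpt: each added
# data flow contributes Tampering, Information disclosure, and Denial of
# service; the added process contributes all six STRIDE threats. The two
# extra elevation-of-privilege groupings are counted separately.
THREATS_PER_ELEMENT = {
    "data flow": ["Tampering", "Info disclosure", "Denial of service"],
    "process":   ["Spoofing", "Tampering", "Repudiation",
                  "Info disclosure", "Denial of service",
                  "Elevation of privilege"],
}

added_elements = ["data flow", "data flow", "process"]  # the intercept module
extra_eop_groupings = 2  # outsiders, and policy-violating insiders

delta = sum(len(THREATS_PER_ELEMENT[e]) for e in added_elements)
print(delta)                        # 12
print(delta + extra_eop_groupings)  # 14
```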

A Lawful Access Threat Model

If instead of software-centric modeling you use attacker-centered modeling on the systems shown in Figures 9-2 and 9-3, you find two sets of threats: First, each law enforcement agency that is authorized to connect adds its employees and IT systems as possible threats, and possible threat vectors. Second, attackers are likely to attack these features of the system to abuse them. The 2010 “Aurora” attacks on Google and others allegedly did exactly this (McMillan, 2010, and Adida, 2013). Thus, by comparing them you can see that the addition of these features creates additional risk. You might also wonder where those risks fall, but that’s outside the scope of this example.

More subtly, the addition of the code in Figure 9-3 is an obvious source of security vulnerabilities. As such, it may draw attention and possibly effort away from the rest of the system. Thus, the components that comprise Figure 9-2 are likely to be less secure, even ignoring the threats to the additional components. In the same vein, the requests and implementations for such backdoors may be confidential or classified. If that’s the case, the features may not go through normal tracking for implementation, testing, or review, again reducing the odds that they are secure. Of course, because such a system is designed to bypass other security controls, any weaknesses are likely to have outsized impact.

The technical arguments are simple. All other things being equal, systems with backdoors are unavoidably less secure, and probably worse than that. American companies cannot be competitive if the government forces us to add them, making products produced here less secure than those produced elsewhere.

The New Cyber Agency Will Likely Cyber Fail

by adam on February 10, 2015

The Washington Post reports that there will be a “New agency to sniff out threats in cyberspace.” This is my first analysis of what’s been made public.

Details are not fully released, but there are some obvious problems, which include:

  • “The quality of the threat analysis will depend on a steady stream of data from the private sector” which continues to not want to send data to the Feds.
  • The agency is based in the Office of the Director of National Intelligence. The world outside the US is concerned that the US spies on them, which means that the new center will get minimal cooperation from any company which does business outside the US.
  • There will be privacy concerns about US citizen information, much like there was with the NCTC. For example, here.
  • The agency is modeled on the National Counter Terrorism Center. See “Law Enforcement Agencies Lack Directives to Assist Foreign Nations to Identify, Disrupt, and Prosecute Terrorists (2007)”. A new agency certainly has upwards of three years to get rolling, because that will totally help.
  • The President continues to ask the wrong questions of the wrong people. (“President Obama wanted to know the details. What was the impact? Who was behind it? Monaco called meetings of the key agencies involved in the investigation, including the FBI, the NSA and the CIA.” But not the private sector investigators who were analyzing the hard drives and the logs?)

It’s all well and good to take stabs at this, but perhaps more useful would be some positive contributions. I have been doing my best to make those contributions.

I sent a letter to the Data.gov folks back in 2009, asking for more transparency. Similarly, I sent an open letter to the new cyber-czar.

The suggestions there have not been acted upon. Rather than re-iterate them, I’ll say that I believe there are human reasons why that’s the case, and so in 2013 I asked the Royal Society to look into the reasons that calls for an NTSB-like function have failed, as part of their research vision for the UK.

Cyber continues to suck. Maybe it’s time to try openness, rather than a new secret agency secretly analyzing who’s behind the attacks instead of why they succeed or why our defenses aren’t working. If we can’t get to openness, and apparently we cannot, we should look at the reasons why. We should inventory them (shame, liability fears, fear of customers fleeing) and assess their accuracy and predictive value. We should invest in a research program that helps us understand and address them, so we can get to a proper investigative approach to why cyber is failing. Only then will we be able to do anything about it.

Until then, keep moving those deck chairs.

What CSOs can Learn from Pete Carroll

by adam on February 6, 2015

Pete Carroll

If you listen to the security echo chamber, after an embarrassing failure like a data breach, you lose your job, right?

Let’s look at Seahawks Coach Pete Carroll, who made what the home town paper called the “Worst Play Call Ever.” With less than a minute to go in the Super Bowl, and the game hanging in the balance, the Seahawks passed. It was intercepted, and…game over.

                  Breach                     Lose the Super Bowl
Publicity         News stories, letters      Half of America watches the game
Headline          Another data breach        Worst Play Call Ever
Cost              $187 per record!           Tens of millions in sponsorship
Public response   Guessing, not analysis     Monday morning quarterbacking*
Outcome           CSO loses job              Pete Carroll remains employed

So what can the CSO learn from Pete Carroll?

First and foremost, have back to back winning seasons. Since you don’t have seasons, you’ll need to do something else that builds executive confidence in your decision making. (Nothing builds confidence like success.)

Second, you don’t need perfect success, you need successful prediction and follow-through. Gunnar Peterson has a great post about the security VP winning battles. As you start winning battles, you also need to predict what will happen. “My team will find 5-10 really important issues, and fixing them pre-ship will save us a mountain of technical debt and emergency events.” Pete Carroll had that—a system that worked.

Executives know that stuff happens. The best laid plans…no plan ever survives contact with the enemy. But if you routinely say things like “one vuln, and it’s over, we’re pwned!” or “a breach here could sink the company!” you lose any credibility you might have. Real execs expect problems to materialize.

Lastly, after what had to be an excruciating call, he took the conversation to next year, to building the team’s confidence, and not dwelling on the past.

What Pete Carroll has is a record of delivering what executives wanted, rather than delivering excuses, hyperbole, or jargon. Do you have that record?

* Admittedly, it started about 5 seconds after the play, and come on, how could I pass that up? (Ahem)
† I’m aware of the gotcha risk here. I wrote this the day after Sony Pictures Chairman Amy Pascal was shuffled off to a new studio.

Security 101: Show Your List!

by adam on January 5, 2015

Lately I’ve noted a lot of people quoted in the media after breaches saying “X was Security 101. I can’t believe they didn’t do X!” For example, “I can’t believe that LinkedIn wasn’t salting passwords! That’s security 101!”

Now, I’m unsure if that’s “security 101” or not. I think security 101 for passwords is “don’t store them in plaintext”, or “don’t store them with a crypto algorithm you designed”. Ten years ago, it would have included salting, but with the speed of GPU crackers, maybe it doesn’t anymore. A good library would probably still include it. Maybe LinkedIn was spending more on preventing XSS or SQL injection, and that pushed password storage off their list. Maybe that’s right, maybe it’s wrong. To tell you the truth, I don’t want to argue about it.
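
For the curious, here’s roughly what the non-controversial part looks like in practice: a minimal sketch of salted, slow password hashing using Python’s standard-library scrypt. A real system would more likely use a maintained library such as bcrypt or argon2 and tune the cost parameters; the point is a unique salt per user, a deliberately slow hash, and constant-time comparison.

```python
# Minimal sketch of salted password storage using the standard library's
# scrypt. A real system would use a maintained library (bcrypt, argon2) and
# tune the cost parameters; this just shows the shape: unique salt per user,
# slow hash, constant-time comparison.
import hashlib, hmac, os

def hash_password(password):
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
```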


What I want to argue about is the backwards-looking nature of these statements. I want to argue because I did some searching, and not one of those folks I searched for has committed to a list of security 101, or of the “simple controls” every business should have.

This is important because otherwise, hindsight is 20/20. It’s easy to say in hindsight that an organization should have done A or B or C. It’s harder to offer up a complete list in advance, and harder yet to justify the budget required to deploy and operate it.

So I’m going to make three requests for 2015:

  • If you’re an expert (or even play one on the internet), and if you want to say “X is Security 101,” please publish your full list of what’s “101.”
  • If you’re a reporter and someone tells you “X is security 101,” please ask them for their list.
  • Finally, if you’re someone who wants to see security improve, and you hear claims about “101”, please ask for the list.

Oh, and since it’s sauce for the gander, here’s my list for individuals:

  • Stay up to date–get most of your machines on the latest revisions of software and get patches for security issues installed, especially in your browser and AV software.
  • Use a firewall that blocks most inbound traffic.
  • Ensure you have a working backup of your data.

(There are complex arguments about AV software, and a lack of agreement about how to effectively test it. Do you need it? Will it block the wildlist? There’s nuance, but that nuance doesn’t play into a 101 list. I won’t be telling people not to use AV software.)
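
To make the backup item on that list concrete, here’s a tiny sketch of a freshness check; the backup directory is a hypothetical path, and “working” ultimately means you’ve tested a restore, which no script can do for you.

```python
# A tiny sketch of checking the third item on the list above: does a recent
# backup actually exist? BACKUP_DIR is a hypothetical path; "working" also
# means you periodically restore from it, which no script can do for you.
import os, time

BACKUP_DIR = "/backups"  # hypothetical location
MAX_AGE_DAYS = 7

def newest_backup_age_days(path):
    files = [os.path.join(path, f) for f in os.listdir(path)]
    newest = max(os.path.getmtime(f) for f in files)
    return (time.time() - newest) / 86400

age = newest_backup_age_days(BACKUP_DIR)
print("OK" if age <= MAX_AGE_DAYS else f"Stale: last backup {age:.1f} days old")
```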


*By “lately,” I meant in 2012, when I wrote this, right after the LinkedIn breach. But I recently discovered that I hadn’t posted it.

[Update: I’m happy to see Ira Winkler and Araceli Treu Gomes took up the idea in “The Irari rules for declaring a cyberattack ‘sophisticated’.” Good for them!]

Security Lessons from Drug Trials

by adam on December 15, 2014

When people don’t take their drugs as prescribed, it’s for very human reasons.

Typically they can’t tolerate the side effects, the cost is too high, they don’t perceive any benefit, or they’re just too much hassle.

Put these very human (and very subjective) reasons together, and they create a problem that medicine refers to as non-adherence. It’s an awkward term that describes a daunting problem: about 50% of people don’t take their drugs as prescribed, and this creates some huge downstream costs. Depending how you count it, non-adherence racks up between $100 billion and $280 billion in extra costs – largely due to a condition worsening and leading to more expensive treatments down the line.

So writes Thomas Goetz in “Getting People To Take Their Medicine.” But he is not simply griping about the problem; he’s presenting a study of ways to address it.

That’s important because we in information security also ask people to do things, from updating their software to trusting certain pixels and not other visually identical pixels, and they don’t do those things, also for very human reasons.

His conclusion applies almost verbatim to information security:

So we took especial interest in the researcher’s final conclusion: “It is essential that researchers stop re-inventing the poorly performing ‘wheels’ of adherence interventions.” We couldn’t agree more. It’s time to stop approaching adherence as a clinical problem, and start engaging with it as a human problem, one that happens to real people in their real lives. It’s time to find new ways to connect with people’s experiences and frustrations, and to give them new tools that might help them take what the doctor ordered.

If only information security’s prescriptions were backed by experiments as rigorous as clinical trials.

(I’ve previously shared Thomas Goetz’s work in “Fear, Information Security, and a TED Talk“)

Threat Modeling At a Startup

by adam on December 1, 2014

I’ve been threat modeling for a long time, and at Microsoft, had the lovely opportunity to put some rigor into not only threat modeling, but into threat modeling in a consistent, predictable, repeatable way. Because I did that work at Microsoft, sometimes people question how it would work for a startup, and I want to address that from having done it. I’ve done threat modeling for startups like Zero Knowledge (some of the output is in the Freedom security whitepaper) and, as a consultant or advisor, for other startups who decided against the open kimono approach.

Those threat modeling sessions have taken anywhere from a few hours to a few days, and the same techniques that we developed to make threat modeling more effective can be used to make it more efficient. One key aspect is to realize that what security folks call threat modeling comprises several related tasks, and each of them can be done in different ways. Even more important, only one of the steps is purely focused on threats.

I’ve seen a four-question framework work at all sorts of organizations. The four questions are: “What are we building or deploying?” “What could go wrong?” “What are we going to do about it?” and “Did we do an acceptable job at this?”

The “what are we building” question is about creating a shared model of the system. This can be done on a napkin or a whiteboard, or for a large complex system which has grown organically, can involve people working for months. In each case, there’s value beyond security. Your code is cleaner, more consistent, more maintainable, and sometimes even faster when everyone agrees on what’s running where. Discussing trust boundaries can be quick and easy, and understanding them can lead to better marshaling and fewer remote calls. Of course, if your code is spaghetti and no one’s ever bothered to whiteboard it, this step can involve a lot of discussion and debate, because it will lead to refactoring. Maybe you can launch without that refactoring, maybe it’s important to do it now. It can be hard to tell if no one knows how your system is put together.
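
One way to keep that shared model from living only in a whiteboard photo is to write it down as data. Here’s a hedged sketch (the components and boundaries are invented for illustration): a few dictionaries describing what runs where, plus a helper that lists the flows crossing trust boundaries, which is where threats tend to cluster.

```python
# A sketch of the "what are we building" step as data instead of a whiteboard
# photo: components, the flows between them, and which trust zone each
# component sits in. The model itself is hypothetical; the point is that a
# checked-in description like this is cheap to keep current and to diff.
COMPONENTS = {
    "browser":  "internet",
    "web_app":  "dmz",
    "api":      "internal",
    "database": "internal",
}

FLOWS = [
    ("browser", "web_app", "HTTPS"),
    ("web_app", "api", "REST"),
    ("api", "database", "SQL"),
]

def boundary_crossings(components, flows):
    """Flows that cross a trust boundary are where threats tend to cluster."""
    return [(src, dst, proto) for src, dst, proto in flows
            if components[src] != components[dst]]

for crossing in boundary_crossings(COMPONENTS, FLOWS):
    print("crosses a trust boundary:", crossing)
```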

Understanding “what could go wrong” can happen in as little as an hour for people new to security using the Elevation of Privilege game. You should go breadth-first here, getting an understanding of where the threats might cluster, and if you need to dig deeper.

What are you going to do about it? You’re going to track issues as bugs. (Startups should not spend energy thinking about bugs and flaws as two separate things with their own databases.) If your bugs are post-its, that’s great. If you’re using a database, fine. Just track the issues. Some of them will get turned into tests. If you’re using Test-Driven Development, threat modeling is a great way to help you think about security testing and where it needs to focus. (If you’re not using TDD, it’s still a great way to focus your security testing.)
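
As an illustration of a threat turning into a test, here’s a hedged sketch; the bug format, the /export endpoint, and the toy handler are all invented, and the real version would live in whatever test framework you already use.

```python
# Sketch: a threat tracked like any other bug, plus the regression test it
# becomes. The tiny handle_request() stands in for whatever your service
# actually does; the bug format and the /export endpoint are invented for
# illustration, not a prescription.

THREAT_BUG = {
    "id": "SEC-17",
    "title": "Spoofing: /export accepts requests with no auth token",
    "mitigation": "require a valid session token on every /export request",
}

def handle_request(path, token):
    """Toy request handler: returns an HTTP-style status code."""
    if path == "/export" and not token:
        return 401
    return 200

def test_export_requires_auth():
    # Regression test derived from SEC-17: no token means no export.
    assert handle_request("/export", token=None) == 401

test_export_requires_auth()
print("SEC-17 regression test passed")
```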

Total time spent for a startup can be as low as a few hours, especially if you’ve been thinking about software architecture and boundaries along the way. I’ve never spent more than a few days on threat modeling with a small company, and never heard it called a poor use of time. Of course, it’s easy to say that each thing that someone wants a startup to do “only takes a little time,” but all those things can add up to a substantial distraction from the core product. However, a media blow-up or a Federal investigation can also distract you, and distract you for a lot longer. A few hours of threat modeling can help you avoid the sorts of problems that distracted Twitter or Snapchat. The landscape continues to change rapidly post-Snowden, and threat modeling is the fastest way to consider what security investments a startup should be making.

[Update, Feb 5: David Cowan has a complementary document, “Security for Startups in 10 Steps.” He provides context and background on TechCrunch.]

Usable Security: History, Themes, and Challenges (Book Review)

by adam on November 17, 2014

Simson Garfinkel and Heather Lipford’s Usable Security: History, Themes, and Challenges should be on the shelf of anyone who is developing software that asks people to make decisions about computer security.

We have to ask people to make decisions because they have information that the computer doesn’t. My favorite example is the Windows “new network” dialog, which asks what sort of network you’re connecting to: work, home, or coffee shop. The information is used to configure the firewall. My least favorite example is phishing, where people are asked to make decisions about technical minutiae before authenticating. Regardless, we are not going to entirely remove the need for people to make decisions about computer security. So we can either learn to gain their participation in more effective ways, or we can accept a very high failure rate. The former option is better, and this book is a substantial contribution.

It’s common for designers to throw up their hands at these challenges, saying things like “given a choice between security and dancing babies, people will choose dancing babies every time,” or “you can’t patch human stupidity.” However, in a recently published study by Google and UCSD, the best phishing sites fooled only 45% of the people who clicked through, while overall only 13% were fooled. (There’s a good summary of that study available.) Claiming that “people will choose dancing babies 13% of the time” just doesn’t seem like a compelling argument against trying.

This slim book is a review of the academic work that’s been published, almost entirely in the last 20 years, on how people interact with information security systems. It summarizes and contextualizes the many things we’ve learned, mistakes that have been made, and does so in a readable and concise way. The book has six chapters:

  • Intro
  • A brief history
  • Major Themes in UPS Academic Research
  • Lessons Learned
  • Research Challenges
  • Conclusion/The Next Ten Years

The “Major themes” chapter is 61 or so pages, which is over half of the 108 pages of content. (The book also has 40 pages of bibliography). Major themes include authentication, email security and PKI, anti-phishing, storage, device pairing, web privacy, policy specification, mobile, social media and security administration.

The “Lessons Learned” chapter is quite solid, covering “reduce decisions,” “safe and secure defaults,” “provide users with better information, not more information,” “users require clear context to make good decisions,” “information presentation is critical” and “education works but has limits.” I have a quibble, which is that Sasse’s concept of mental ‘compliance budgets’ is also important, and I wish it were given greater prominence. (My other quibble is more of a pet peeve: the term “user” where “people” would serve. Isn’t it nicer to say “people require clear context to make good decisions”?) Neither quibble should take away from my key message, which is that this is an important new book.

The slim nature of the book is, I believe, an excellent usability property. The authors present what’s been done and the lessons they feel can be taken away, and move to the next topic. This lets you, the reader, design, build, or deploy systems that help the person behind the keyboard make the decisions they want to make. To re-iterate, anyone building software that asks people to make decisions should read the lessons contained within.

Disclaimer: I was paid to review a draft of this book, and my name is mentioned kindly in the acknowledgements. I am not being paid to write or post reviews.

[Updated to correct the sentence about the last 20 years.]

Modeling Attackers and Their Motives

by adam on November 11, 2014

There are a number of reports out recently, breathlessly presenting their analysis of one threatening group of baddies or another. You should look at the reports for facts you can use to assess your systems, such as filenames, hashes and IP addresses. Most readers should, at most, skim their analysis of the perpetrators. Read on for why.

There are a number of surface reasons that you might reject or ignore these reports. For example, these reports are funded by marketing. Even if they are, that’s not a reason to reject them. The baker does not bake bread for fun, and the business goal of marketing can give us useful information. You might reject them for their abuse of adjectives like “persistent”, “stealthy”, or “sophisticated.” (I’m tempted to just compile a wordcloud and drop it in place of writing.) No, the reason to only skim these is what the analysis does to the chance of your success. There are two self-inflicted wounds that often happen when people focus on attackers:

  • You miss attackers
  • You misunderstand what the attackers will do

You may get a vicarious thrill from knowing who might be attacking you, but that very vicarious thrill is likely to make those details available to your conscious mind, or anchor your attention on them, causing you to miss other attackers. Similarly, you might get attached to the details of how they attacked last year, and not notice how those details change.

Now, you might think that your analysis won’t fall into those traps, but let me be clear: the largest, best-funded analysis shops in the world routinely make serious and consequential mistakes about their key areas of responsibility. The CIA didn’t predict the collapse of the Soviet Union, and it didn’t predict the rise of ISIS.

If your organization believes that it’s better at intelligence analysis than the thousands of people who work in US intelligence, then please pay attention to my raised eyebrow. Maybe you should be applying that analytic awesomesauce to your core business, maybe it is your core business, or maybe you should be carefully combing through the reports and analysis to update your assessments of where these rapscallions shall strike next. Or maybe you’re over-estimating your analytic capabilities.

Let me lay it out for you: the “sophisticated” attackers are using phishing to get a foothold, then dropping malware which talks to C&C servers in various ways. The phishing has three important variants you need to protect against: links to exploit web pages, documents containing exploits, and executables disguised as documents. If you can’t reliably prevent those things, detect the ones you’ve missed, and respond when you discover you’ve missed one, then digging into the motivations of your attackers may not be the best use of your time.
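
As one concrete example of the third variant, here’s a minimal sketch that flags attachments whose names look like a document hiding an executable. The extension lists are illustrative only; real mail filtering inspects content and macros, not just filenames.

```python
# Sketch: flag attachments that look like documents but are really executables,
# the third phishing variant above. The lists are illustrative; real filtering
# inspects file content and macro payloads, not just names.
EXECUTABLE_EXTENSIONS = {".exe", ".scr", ".js", ".jar", ".bat", ".cmd"}
DOCUMENT_EXTENSIONS = {".pdf", ".doc", ".docx", ".xls", ".xlsx"}

def looks_disguised(filename):
    parts = filename.lower().rsplit(".", 2)
    if len(parts) == 3:
        # e.g. "invoice.pdf.exe": a document extension hiding an executable one
        return ("." + parts[2] in EXECUTABLE_EXTENSIONS
                and "." + parts[1] in DOCUMENT_EXTENSIONS)
    return False

for name in ["invoice.pdf.exe", "report.docx", "statement.xls.scr"]:
    print(name, "->", "suspicious" if looks_disguised(name) else "ok")
```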

The indicators that can help you find the successful attacks are an important value from these reports, and that’s what you should use them for. Don’t get distracted by the motivations.

Employees Say Company Left Data Vulnerable

by adam on October 7, 2014

There’s a recurring theme in data breach stories:

The risks were clear to computer experts inside $organization: The organization, they warned for years, might be easy prey for hackers.

But despite alarms as far back as 2008, $organization was slow to raise its defenses, according to former employees.

The particular quote is from “Ex-Employees Say Home Depot Left Data Vulnerable,” but you can find similar statements about healthcare.gov, Target, and most other breaches. It’s worth taking apart these claims a little bit, and asking what we can do about them.

This is a longish blog post, so the summary is: these claims are true, irrelevant, and a distraction from engineering more secure organizations.

I told you so?

First, these claims are true. Doubtless, in every organization of any size, there were people advocating all sorts of improvements, not all of which were funded. Employees who weren’t successful at driving effective change complain that “when they sought new software and training, managers came back with the same response: ‘We sell hammers.’” The “I told you so” isn’t limited to employees; there’s a long list of experts who are willing to wax philosophic about the mote in their neighbors’ eyes. This often comes in the form of “of course you should have done X.” For example, the Home Depot article includes a quote from Gartner: “Scanning is the easiest part of compliance…There are a lot of services that do this. They hardly cost any money.” I’ll get to that claim later in this article. First, let’s consider the budget items actually enumerated in the article.

Potential Spending

In the New York Times article on Home Depot, I see at least four programs listed as if they’re trivial, cheap, and would have prevented the breach:

  1. Anti-virus
  2. Threat intelligence
  3. Continuous (network) anomaly detection
  4. Vulnerability scanning

Let’s discuss each in turn.

(1) The claims that even modern, updated anti-virus is trivially bypassed by malware employed by criminals are so common I’m not going to look for a link.

(2) Threat intelligence (and “sharing”) usually means a feed of “observables” or “indicators of compromise.” These usually include hashes of files dropped by intruders, IP addresses and domain names for the “command and control” servers, or the fake emails that are sent, containing an exploit, a trojan horse, or a phishing link. This can be useful if your attackers don’t bother to change such things between attacks, and the current state of these feeds and their use is such that many attackers don’t really bother to make such changes. (See also my previous comments on “Don’t share, publish:” we spend so much time on rules for sharing that we don’t share.) However, before saying “everyone should sign up for such services” or “they’ll be a silver bullet,” we should consider what the attackers will do, which is to buy the polymorphism services that more common malware has been using for years. So it is unlikely that threat intelligence would have prevented this breach.
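
The mechanical part of “use the indicators” is just matching. Here’s a minimal sketch against a made-up feed; note that it only works until the attackers rotate their hashes and infrastructure, which is exactly the point above.

```python
# Sketch: the mechanical part of "use the indicators." Match observed file
# hashes and destination IPs against a published feed. The feed contents are
# made up; as noted above, attackers who rotate hashes and infrastructure
# cheaply will not show up here.
IOC_FEED = {
    "sha256": {"9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"},
    "ips":    {"203.0.113.7", "198.51.100.23"},  # documentation-range addresses
}

def match_iocs(observed_hashes, observed_ips):
    return {
        "hash_hits": set(observed_hashes) & IOC_FEED["sha256"],
        "ip_hits":   set(observed_ips) & IOC_FEED["ips"],
    }

hits = match_iocs(
    observed_hashes={"9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"},
    observed_ips={"192.0.2.10", "203.0.113.7"},
)
print(hits)
```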

(3) Continuous anomaly detection generally only works if you have excellent change management processes, careful network segmentation, and a mostly static business environment. In almost any real network, the volume of alarms from such systems is high, and the value of the alarms incredibly low. (This is a result of the organizations making the systems not wanting to be accused of negligence because their system didn’t “sound the alarm,” and so they alarm on everything.) Most organizations that field such things end up ignoring the alarms, dropping the maintenance contracts, and leaving the systems in place to satisfy the PCI auditors.
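
The “high alarm volume, low alarm value” problem is mostly base-rate arithmetic. Here’s a back-of-the-envelope calculation with invented numbers that are deliberately generous to the detector.

```python
# Back-of-the-envelope base-rate arithmetic behind "lots of alarms, little
# value." The numbers are made up for illustration, and deliberately generous
# to the detector.
events_per_day = 1_000_000      # network events scored by the detector
true_intrusion_events = 10      # events that are actually part of an attack
detection_rate = 0.9            # detector catches 90% of real attack events
false_positive_rate = 0.001     # flags 0.1% of benign events

true_alarms = true_intrusion_events * detection_rate
false_alarms = (events_per_day - true_intrusion_events) * false_positive_rate

print(f"alarms per day: {true_alarms + false_alarms:.0f}")
print(f"fraction of alarms that matter: "
      f"{true_alarms / (true_alarms + false_alarms):.2%}")
```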

(4) Vulnerability scanning may be cheap, but like anomaly detectors, scanners are motivated to “sound the alarm” on everything. Most alarms do not come with push-button remediation. Even where that feature is offered, there’s a need to test the remediation to see if it breaks anything, to queue it in the aforementioned change management, and to work across some team boundary so the operations team takes action. None of which falls under the rhetoric of “hardly cost any money.”

The Key Question: How to do better?

Any organization exists to deliver something, and usually that something is not cyber security. In Home Depot’s case, they exist to deliver building supplies at low cost. So how should Home Depot’s management make decisions about where to invest?


Security decisions are like a lot of other decisions in business. There’s insufficient information, there are people who may be acting deceitfully, and the stakes are high. But unlike a lot of other decisions, figuring out whether you made the right one is hard. Managers make a lot of decisions, and the relationship between those decisions and the security outcomes is hard to understand.

The trouble is, in security, we like to hide our decisions, and hide the outcomes of our decisions. As a result, we never learn. And employees keep saying “I told you so” about controls that may or may not help. As I said at BSides Las Vegas, companies are already paying the PR price; they need to move to talking about what happened.

With that information, we can do better at evaluating controls, and moving from opinions about what works (with the attendant “I told you so”) to evidence about effective investments in security.