A New Way to Tie Security to Business

by adam on June 20, 2016

[Image: a woman with a chart, saying "that's ok, I don't know what it means either"]

As security professionals, sometimes the advice we get is to think about the security controls we deploy as some mix of “cloud access security brokerage” and “user and entity behavioral analytics” and “next generation endpoint protection.” We’re also supposed to “hunt”, “comply,” and ensure people have had their “awareness” raised. Or perhaps they mean “training,” but how people are expected to act post-training is often maddeningly vague, or worse, unachievable. Frankly, I have trouble making sense of it, and that’s before I read about how your new innovative revolutionary disruptive approach is easy to deploy to ensure that APT can’t get into my network to cloud my vision.

I’m making a little bit of a joke, because otherwise it’s a bit painful to talk about.

Really, we communicate badly. It hurts our ability to drive change to protect our organizations.

A CEO once explained his view of cyber. He said “security folks always jump directly into details that just aren’t important to me. It’s as if I met a financial planner and he started babbling about a mutual fund’s beta before he understood what my family needed.” It stuck with me. Executives are generally smart people with a lot on their plates, and they want us, as security leaders, to make ourselves understood.

I’ve been heads down with a small team, building a new kind of risk management software. It’s designed to improve executive communication. Our first customers are excited and finding that it’s changing the way they engage with their management teams. Right now, we’re looking for a few more forward-looking organizations that want to improve their security, allocate their resources better and link what they’re doing to what the business needs.

If you’re a leader at such a company, please send me an email at [first]@[last].org, leave a comment, or reach out via LinkedIn.

Sneak peeks at my new startup at RSA

by adam on February 18, 2016

[Image: Confusion]

Many executives have been trying to solve the problem of connecting security to the business, and we’re excited about what we’re building to serve this important and unmet need. If you present security with an image like the one above, we may be able to help.

My new startup is getting ready to show our product to friends at RSA. We’re building tools for enterprise leaders to manage their security portfolios. What does that mean? By analogy, if you talk to a financial advisor, they have tools to help you see your total financial picture: assets and debts. They’ll help you break out assets into long-term holdings (like a home) or liquid investments (like stocks and bonds), and then further contextualize each as part of your portfolio. There hasn’t been an easy way to model and manage a portfolio of security control investments, and we’re building the first.

If you’re interested, we have a few slots remaining for meetings in our suite at RSA! Drop me a line at [first]@[last].org, leave a comment, or reach out over LinkedIn.

Threat Modeling: Chinese Edition

by adam on February 1, 2016

[Image: the Chinese edition of Threat Modeling]
I’m excited to say that Threat Modeling: Designing for Security is now available in Chinese.

This is a pretty exciting milestone for me: it’s my first book translation, and after Elevation of Privilege, my second work to be translated into Chinese.

You can buy it from Amazon.cn.

Seeking a technical leader for my new company

by adam on July 30, 2015

We have a new way to measure security effectiveness, and we want someone who’ll drive delivery of that technology to customers, while building a great place for developers to ship and deploy important technology. We are very early in building the company. The right person will understand that such a “green field” represents both opportunity and the need to build infrastructure as we grow.

This person might be a CTO; they might be a Chief Architect. They are certainly an experienced leader with strong references from peers, management, and reports.

Lead on:

  • Product development & delivery including service, platform, statistics
  • Technology platform decisions (full stack from OS to UI frameworks)
  • Product quality including security & reliability
  • Technology process & culture which values effective collaboration
  • Tech hiring aligned with budget

Qualifications:

  • Early startup leadership experience (must; for example, as one of the first 50 employees)
  • Shipped at least one product from concept stage (must)
  • Big data/data science product/service experience (must)
  • Outstanding communication skills – both technical and interpersonal (must)
  • Gets hands dirty
  • UI & Design thinking (should)
  • Seattle location is ideal

We’re an equal opportunity employer, and welcome applicants from diverse backgrounds, except rock stars. We want great folks with humility, empathy and a desire to learn and grow as they help our customers measure security.

If you’re interested, please get in touch with me at first@last.org.

Improving Security Effectiveness

by adam on July 16, 2015

For the last few months, I’ve been working full time and talking with colleagues about a new way for security executives to measure the effectiveness of security programs. In very important ways, the ideas are new and non-obvious, and at the same time, they’re an evolution of the ideas that Andrew and I wrote about in the New School book that inspired this blog.

I’m super-excited by what I’ve learned. I’m looking to grow the team and talk with security executives at large organizations, and so I’m saying a little more, but not “launching” or sharing a lot of details. This is less about ‘stealth mode’ and more about my desire to say factual and interesting things.


I’m familiar with Chris Dixon’s argument that “you shouldn’t keep your startup idea secret,” along with Tren Griffin’s “12 Things I Learned from Chris Dixon about Startups.” There’s also a bit of advice from The Lean Startup:

In fact, I have often given entrepreneurs fearful of this issue the following assignment: take one of your ideas (one of your lesser insights, perhaps), find the name of the relevant product manager at an established company who has responsibility for that area, and try to get that company to steal your idea. Call them up, write them a memo, send them a press release—go ahead, try it. The truth is that most managers in most companies are already overwhelmed with good ideas. Their challenge lies in prioritization and execution, and it is those challenges that give a startup hope of surviving.

So why am I keeping this quiet? Startup L Jackson said it best:

Their big issue was that they were Silicon Valley famous before product-market fit, which meant the failure was public and the hype cycle inescapable. That may have been their fault, may not have been. Journalists can sort that out. Other founders: don’t ever do this.

I’d like to avoid a hype cycle, so what I’ve been doing, and plan to continue doing, is spending time with customers, refining a product plan, building a team to deliver on that plan, and really delivering it before spending time on marketing or press. When we can say “our customers use our products to …,” that seems like a good time to say a lot more.

In the meanwhile, if you’re interested in building a high-scale service to gather and analyze data to deliver actionable advice building on a new set of models, I’d love to talk to you.

If you’d like to stay informed, the best way is to sign up for my very low volume “Adam Shostack’s New Thing” mail list.

PCI & the 166816 password

by adam on June 22, 2015

This was a story back around RSA, but I missed it until RSnake brought it up on Twitter: “[A default password] can hack nearly every credit card machine in the country.” The simple version is that Charles Henderson of Trustwave found that “90% of the terminals of this brand we test for the first time still have this code.” (Slide 30 of RSA deck.) Wow.

Now, I’m not a fan of the “ha-ha in hindsight” or “that’s security 101!” responses to issues. In fact, I railed against it in a blog post in January, “Security 101: Show Your List!”

But here’s the thing. Credit card processors have a list. It’s the Payment Card Industry Data Security Standard. That standard is imposed, contractually, on everyone who processes payment cards through the big card networks. In version 3, requirement 2 is “Do not use vendor-supplied defaults for system passwords.” This is not an obscure sub-bullet. As far as I can tell, it is not a nuanced interpretation, or even an interpretation at all. In fact, testing procedure 2.1.a of v3 of the standard says:

2.1.a Choose a sample of system components, and attempt to log on (with system administrator help) to the devices and applications using default vendor-supplied accounts and passwords, to verify that ALL default passwords (including those on … POS terminals …) have been changed. [I’ve elided a few elements of the list for clarity.]

Now, the small merchant may not be aware that their terminal has a passcode. They may have paid someone else to set it up. But shouldn’t that person have set it up properly? The issue is not just that the passcodes are not reset; the issue that worries me is that the system appears broken. We appear to have evidence that, to get security right, the system requires activity by busy, possibly undertrained people. Why is that still required ten years into PCI?

This isn’t a matter of “checklists replacing security.” I have in the past railed against checklists (including in the book this blog is named after). But after I read “The Checklist Manifesto”, I’ve moderated my views a bit, such as in “Checklists and Information Security.” Here we have an example of exactly what checklists are good for: avoiding common, easy-to-make and easy-to-check mistakes.

When I raised some of these questions on Twitter, someone said that the usual interpretation is that the site selects the sample (where they lack confidence). To an extent, that’s understandable, and I’m working very hard to avoid hindsight bias here. But I think it’s not hindsight bias to say that a sample should be a random sample unless there’s a very strong reason to choose otherwise. It’s not hindsight bias to note that your financial auditors don’t let you select which transactions are audited.
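
To make that concrete, here’s what assessor-chosen random sampling could look like. This is a minimal sketch, with a made-up terminal inventory; it’s my illustration, not anything specified by the PCI standard:

    # Hypothetical sketch: the assessor, not the merchant, draws a uniform
    # random sample of terminals for default-password testing.
    import random

    def choose_audit_sample(terminal_ids, sample_size, seed=None):
        """Pick a uniform random sample of terminals to test."""
        rng = random.Random(seed)  # recording the seed makes the draw reproducible
        return rng.sample(terminal_ids, min(sample_size, len(terminal_ids)))

    terminals = [f"POS-{n:04d}" for n in range(1, 501)]  # made-up inventory
    print(choose_audit_sample(terminals, 10))

The point isn’t the code; it’s that the selection happens outside the merchant’s control, the same way your financial auditors pick the transactions.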

Someone else pointed out that it’s “first time audits,” which is OK, but why, a decade after PCI 1.0, is this still an issue? Shouldn’t the vendor have addressed this by now? Admittedly, it may be hard to manage device PINs at scale: if you’re Target, with tens of thousands of PIN pads, and your techs need access to the PINs, what do you do to ensure that a tech has access while at the register, but not after they’ve quit? But even given such challenges, shouldn’t the overall payment card security system be forcing a fix to such issues?
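
To be fair, the PIN-management problem has solutions. One way to square “techs need access at the register” with “not after they’ve quit” is to avoid a fleet-wide passcode entirely: derive a per-device, rotating passcode that a tech fetches from an access-controlled service that checks they’re still employed. A minimal sketch, entirely hypothetical and not a description of any vendor’s or merchant’s system:

    # Hypothetical: derive each terminal's passcode from a master secret and a
    # rotation epoch; techs get the current code from a service that checks
    # employment status, instead of everyone knowing a shared default.
    import hashlib
    import hmac

    MASTER_SECRET = b"rotate me; keep me in an HSM"  # made up for this sketch

    def device_passcode(device_id: str, epoch: int) -> str:
        """Derive a 6-digit passcode for one device in one rotation epoch."""
        msg = f"{device_id}:{epoch}".encode()
        digest = hmac.new(MASTER_SECRET, msg, hashlib.sha256).digest()
        return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"

    # The code for terminal POS-0042 in the June 2015 epoch:
    print(device_passcode("POS-0042", epoch=201506))

Revocation then becomes an identity question at the issuing service, not a re-PIN-every-terminal exercise.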

All bellyaching and sarcastic commentary aside, if the PCI process isn’t catching this, what does that tell us? Serious question. I have ideas, but I’m really curious as to what readers think. Please keep it professional.

Threat Modeling Crypto Back Doors

by adam on May 19, 2015

Today, the Open Technology Institute released an open letter to the President of the United States from a broad set of organizations and experts. I’m pleased to be a signer, and I agree wholeheartedly with the text of the letter. (Some press coverage.)

I did want to pile on with an excerpt from chapter 9 of Threat Modeling: Designing for Security:

For another example of comparative threat modeling, consider the two systems shown in Figures 9-2 and 9-3. Figure 9-2 depicts an e-mail system, and Figure 9-3 is a version of 9-2 with a “lawful intercept” module added. (“Lawful intercept” is an Orwellian phrase for “thing which allows people to bypass the security features of your system.” Setting aside any arguments of “should we as a society have such a mechanism?” it’s possible to assess the technical security implications of adding such mechanisms.)

It should be obvious that Figure 9-2 is more secure than Figure 9-3. Using software-centric modeling, Figure 9-3 adds two data flows and a process; thus, by STRIDE-per-element, it has an additional 12 threats (tampering, information disclosure, and DoS with each flow, for 6; and the six S, T, R, I, D, and E threats against the process, for a total of 12). Additionally, Figure 9-3 has two apparent groupings of elevation-of-privilege threats: those posed by outsiders and those posed by software-allowed, but human-policy-violating, use. Thus, if Figure 9-2 has a list of threats (1…n), then Figure 9-3 has a list of threats (1…n+14).

A Lawful Access Threat Model

If instead of software-centric modeling you use attacker-centered modeling on the systems shown in Figures 9-2 and 9-3, you find two sets of threats: First, each law enforcement agency that is authorized to connect adds its employees and IT systems as possible threats, and possible threat vectors. Second, attackers are likely to attack these features of the system to abuse them. The 2010 “Aurora” attacks on Google and others allegedly did exactly this (McMillan, 2010, and Adida, 2013). Thus, by comparing them you can see that the addition of these features creates additional risk. You might also wonder where those risks fall, but that’s outside the scope of this example.

More subtly, the addition of the code in Figure 9-3 is an obvious source of security vulnerabilities. As such, it may draw attention and possibly effort away from the rest of the system. Thus, the components that comprise Figure 9-2 are likely to be less secure, even ignoring the threats to the additional components. In the same vein, the requests and implementations for such backdoors may be confidential or classified. If that’s the case, the features may not go through normal tracking for implementation, testing, or review, again reducing the odds that they are secure. Of course, because such a system is designed to bypass other security controls, any weaknesses are likely to have outsized impact.
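
To make the counting in that excerpt concrete, here’s a minimal sketch of the STRIDE-per-element arithmetic in Python. The chart of which threats apply to which element types follows the book; the code itself is my illustration, not something from it:

    # The STRIDE-per-element chart: which threat types apply to which
    # element types in a data flow diagram.
    THREATS_PER_ELEMENT = {
        "external entity": ["S", "R"],
        "process":         ["S", "T", "R", "I", "D", "E"],
        "data flow":       ["T", "I", "D"],
        "data store":      ["T", "R", "I", "D"],  # R for stores acting as logs
    }

    def added_threats(new_elements):
        """Count the threats assigned to newly added diagram elements."""
        return sum(len(THREATS_PER_ELEMENT[e]) for e in new_elements)

    # Figure 9-3 adds two data flows and one process to Figure 9-2:
    print(added_threats(["data flow", "data flow", "process"]))  # 3 + 3 + 6 = 12

Add the two groupings of elevation-of-privilege threats the excerpt describes, and you arrive at the n+14.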

The technical arguments are simple. All other things being equal, systems with backdoors are unavoidably less secure, and probably worse than that. American companies cannot be competitive if the government forces us to add them.

[Update, July 7, 2015: A group of experts has released a longer paper, “Keys Under The Doormat.” Blogs from Ross Anderson and Steve Bellovin give additional perspective.]

The New Cyber Agency Will Likely Cyber Fail

by adam on February 10, 2015

The Washington Post reports that there will be a “New agency to sniff out threats in cyberspace.” This is my first analysis of what’s been made public.

Details are not fully released, but there are some obvious problems, which include:

  • “The quality of the threat analysis will depend on a steady stream of data from the private sector” which continues to not want to send data to the Feds.
  • The agency is based in the Office of the Director of National Intelligence. The world outside the US is concerned that the US spies on them, which means that the new center will get minimal cooperation from any company which does business outside the US.
  • There will be privacy concerns about US citizen information, much like there was with the NCTC. For example, here.
  • The agency is modeled on the National Counter Terrorism Center. See “Law Enforcement Agencies Lack Directives to Assist Foreign Nations to Identify, Disrupt, and Prosecute Terrorists (2007)”. A new agency certainly has upwards of three years to get rolling, because that will totally help.
  • The President continues to ask the wrong questions of the wrong people. (“President Obama wanted to know the details. What was the impact? Who was behind it? Monaco called meetings of the key agencies involved in the investigation, including the FBI, the NSA and the CIA.” But not the private sector investigators who were analyzing the hard drives and the logs?)

It’s all well and good to snipe, but perhaps more useful would be some positive contributions. I have been doing my best to make those contributions.

I sent a letter to the Data.gov folks back in 2009, asking for more transparency. Similarly, I sent an open letter to the new cyber-czar.

The suggestions there have not been acted upon. Rather than re-iterate them, I’ll say that I believe there are human reasons why that’s the case, and so in 2013 I asked the Royal Society to look into reasons that calls for an NTSB-like function have failed, as part of their research vision for the UK.

Cyber continues to suck. Maybe it’s time to try openness, rather than a new secret agency secretly analyzing who’s behind the attacks instead of why they succeed, or why our defenses aren’t working. If we can’t get to openness, and apparently we cannot, we should look at the reasons why. We should inventory them, including shame, liability fears, and fear of customers fleeing, and assess their accuracy and predictive value. We should invest in a research program that helps us understand and address them, so we can get to a proper investigative approach to why cyber is failing; only then will we be able to do anything about it.

Until then, keep moving those deck chairs.

What CSOs can Learn from Pete Carroll

by adam on February 6, 2015

[Image: Pete Carroll]

If you listen to the security echo chamber, after an embarrassing failure like a data breach, you lose your job, right?

Let’s look at Seahawks Coach Pete Carroll, who made what the home town paper called the “Worst Play Call Ever.” With less than a minute to go in the Super Bowl, and the game hanging in the balance, the Seahawks passed. It was intercepted, and…game over.

                 Breach                   Lose the Super Bowl
Publicity        News stories, letters    Half of America watches the game
Headline         Another data breach      Worst Play Call Ever
Cost             $187 per record!         Tens of millions in sponsorship
Public response  Guessing, not analysis   Monday morning quarterbacking*
Outcome          CSO loses job            Pete Carroll remains employed†

So what can the CSO learn from Pete Carroll?

First and foremost, have back-to-back winning seasons. Since you don’t have seasons, you’ll need to do something else that builds executive confidence in your decision making. (Nothing builds confidence like success.)

Second, you don’t need perfect success, you need successful prediction and follow-through. Gunnar Peterson has a great post about the security VP winning battles. As you start winning battles, you also need to predict what will happen. “My team will find 5-10 really important issues, and fixing them pre-ship will save us a mountain of technical debt and emergency events.” Pete Carroll had that—a system that worked.

Executives know that stuff happens. The best laid plans…no plan ever survives contact with the enemy. But if you routinely say things like “one vuln, and it’s over, we’re pwned!” or “a breach here could sink the company!” you lose any credibility you might have. Real execs expect problems to materialize.

Lastly, after what had to be an excruciating call, he took the conversation to next year, to building the team’s confidence, and not dwelling on the past.

What Pete Carroll has is a record of delivering what executives wanted, rather than delivering excuses, hyperbole, or jargon. Do you have that record?

* Admittedly, it started about 5 seconds after the play, and come on, how could I pass that up? (Ahem)
† I’m aware of the gotcha risk here. I wrote this the day after Sony Pictures Chairman Amy Pascal was shuffled off to a new studio.

Security 101: Show Your List!

by adam on January 5, 2015

Lately* I’ve noted a lot of people quoted in the media after breaches saying “X was Security 101. I can’t believe they didn’t do X!” For example, “I can’t believe that LinkedIn wasn’t salting passwords! That’s security 101!”

Now, I’m unsure if that’s “security 101” or not. I think security 101 for passwords is “don’t store them in plaintext”, or “don’t store them with a crypto algorithm you designed”. Ten years ago, it would have included salting, but with the speed of GPU crackers, maybe it doesn’t anymore. A good library would probably still include it. Maybe LinkedIn was spending more on preventing XSS or SQL injection, and that pushed password storage off their list. Maybe that’s right, maybe it’s wrong. To tell you the truth, I don’t want to argue about it.
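
For concreteness, here’s what “a good library” gets you: salted storage with a slow, memory-hard KDF. A minimal sketch using Python’s hashlib.scrypt, with work factors that are illustrative rather than a recommendation:

    # Store a per-user random salt plus the derived key, never the password.
    import hashlib
    import hmac
    import os

    SCRYPT_PARAMS = dict(n=2**14, r=8, p=1)  # illustrative work factors

    def hash_password(password: str):
        """Return (salt, key) for storage."""
        salt = os.urandom(16)  # per-user salt defeats precomputed tables
        key = hashlib.scrypt(password.encode(), salt=salt, **SCRYPT_PARAMS)
        return salt, key

    def verify_password(password: str, salt: bytes, key: bytes) -> bool:
        candidate = hashlib.scrypt(password.encode(), salt=salt, **SCRYPT_PARAMS)
        return hmac.compare_digest(candidate, key)  # constant-time comparison

The salt still costs nothing to keep, even if GPU crackers have eroded how much it buys on its own; the work factor is what actually slows a cracker down.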


What I want to argue about is the backwards-looking nature of these statements. I want to argue because I did some searching, and not one of those folks I searched for has committed to a list of what’s “security 101,” or of the “simple controls” every business should have.

This is important because otherwise, hindsight is 20/20. It’s easy to say in hindsight that an organization should have done A or B or C. It’s harder to offer up a complete list in advance, and harder yet to justify the budget required to deploy and operate it.

So I’m going to make three requests for 2015:

  • If you’re an expert (or even play one on the internet), and if you want to say “X is Security 101,” please publish your full list of what’s “101.”
  • If you’re a reporter and someone tells you “X is security 101” please ask them for their list.
  • Finally, if you’re someone who wants to see security improve, and you hear claims about “101”, please ask for the list.

Oh, and since it’s sauce for the gander, here’s my list for individuals:

  • Stay up to date–get most of your machines on the latest revisions of software and get patches for security issues installed, especially in your browser and AV software.
  • Use a firewall that blocks most inbound traffic.
  • Ensure you have a working backup of your data (a freshness-check sketch follows after this list).

(There are complex arguments about AV software, and a lack of agreement about how to effectively test it. Do you need it? Will it block the wildlist? There’s nuance, but that nuance doesn’t play into a 101 list. I won’t be telling people not to use AV software.)
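
As promised above, here’s a minimal sketch of a backup freshness check. The path and threshold are made up, and “working” really means you’ve tested a restore, so treat this as the cheapest possible tripwire:

    # Alert if the newest file under the backup root is older than a threshold.
    import os
    import time

    BACKUP_ROOT = "/backups"      # made-up location
    MAX_AGE_SECONDS = 2 * 86_400  # two days

    def newest_mtime(root: str) -> float:
        newest = 0.0
        for dirpath, _dirs, filenames in os.walk(root):
            for name in filenames:
                newest = max(newest, os.path.getmtime(os.path.join(dirpath, name)))
        return newest

    age = time.time() - newest_mtime(BACKUP_ROOT)
    print("backup ok" if age < MAX_AGE_SECONDS else "backup is stale!")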


* By “lately,” I meant in 2012, when I wrote this, right after the LinkedIn breach. But I recently discovered that I hadn’t posted it.

[Update: I’m happy to see Ira Winkler and Araceli Treu Gomes took up the idea in “The Irari rules for declaring a cyberattack ‘sophisticated’.” Good for them!]