Transparency: When Security Pros Get Popped

by David Mortman on January 7, 2014

Rich Mogull over at Securosis (N.B. I’m a contributing analyst there) has a great post on how, due to human error, some of his AWS credentials were nabbed by miscreants and abused. We here at the New School love it when folks share how they were compromised and what they did about it. It is this sort of transparency that helps us all. Kudos to Rich for being willing to share his pain for our benefit.

Academic job opening at Cambridge

by adam on July 18, 2013

At Light Blue Touchpaper, Ross Anderson says “We have a vacancy for a postdoc to work on the psychology of cybercrime and deception for two years from October.”

I think this role has all sorts of fascinating potential, and wanted to help get the word out in my own small way.

Google Reader Going Away

by adam on June 30, 2013

Remarkably, software that other people host on your behalf, where you have no contract or only a contract of adhesion, can change at any time.

This isn’t surprising to those who study economics, as all good New School readers try to do. However, this is a reminder/request: when you move to a new reader, please re-subscribe to the New School. We have some interesting announcements forthcoming, and will try to get more interesting content up soon.

Updated WordPress

by adam on June 26, 2013

Please let us know if you see anything strange.

Workshop on the Economics of Information Security

by adam on May 24, 2013

The next Workshop on the Economics of Information Security will be held June 11-12 at Georgetown University, Washington, D.C. Many of the papers look fascinating, including “On the Viability of Using Liability to Incentivise Internet Security”, “A Behavioral Investigation of the FlipIt Game”, and “Are They Actually Any Different? Comparing 3,422 Financial Institutions’ Privacy Practices.”

Not to mention “How Bad Is It? – A Branching Activity Model to Estimate the Impact of Information Security Breaches” previously discussed here.

TrustZone and Security Usability

by adam on May 23, 2013

Cem Paya has a really thought-provoking set of blog posts on “TrustZone, TEE and the delusion of security indicators” (part 1, part 2).

Cem makes the point that all the crypto and execution-protection magic that ARM is building is limited by the question of what the human holding the phone thinks is going on. If a malicious app fakes up the UI, then it can get stuff from the human and abuse it. This problem was well known, and was the reason that NT 3.51 got a “secure attention sequence” when it went in for C2 certification under the old Orange Book. Sure, it lost its NIC and floppy drive, but it gained Control-Alt-Delete, which really does make your computer more secure.
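To make the trusted-path idea concrete, here’s a minimal sketch in Kotlin, with entirely hypothetical names (KeyEvent, Kernel, SecureAttention): the operating system dispatches every key event before any app sees it, and the secure attention sequence is handled by the OS alone, so a spoofed login screen can neither receive nor suppress it.

```kotlin
// Minimal sketch of a secure attention sequence (SAS). All names are
// hypothetical; this simulates the idea, not any real OS API.
sealed interface KeyEvent
object SecureAttention : KeyEvent                 // e.g. Control-Alt-Delete
data class Keystroke(val char: Char) : KeyEvent   // everything else

class Kernel(private val foregroundApp: (Keystroke) -> Unit) {
    fun dispatch(event: KeyEvent) = when (event) {
        // The trusted path: apps cannot register for, or swallow, this event.
        is SecureAttention -> showTrustedLoginScreen()
        // Ordinary keys go to whatever app is in front, spoofed or not.
        is Keystroke -> foregroundApp(event)
    }

    private fun showTrustedLoginScreen() =
        println("Trusted path: switching to the real login screen")
}

fun main() {
    val spoofedUi = { k: Keystroke -> println("Fake login screen captured '${k.char}'") }
    val kernel = Kernel(spoofedUi)
    kernel.dispatch(Keystroke('p'))    // a fooled user starts typing a password...
    kernel.dispatch(SecureAttention)   // ...but the SAS always escapes the fake UI
}
```

The design point is that the guarantee lives in the dispatcher, not in anything the user has to inspect: no matter how perfect the fake UI is, it never gets the chance to handle the sequence.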

But what happens when your phone or tablet has a super-limited set of physical buttons? Even assuming that the person knows they want to be talking to the right program, how do they know what program they’re talking to, and how do they know that in a reliable way?

One part of an answer comes from work by Chris Karlof on Conditioned-safe Ceremonies. The essential idea is that you apply Skinner-style conditioning so people get used to doing something that helps make them more secure.

One way we could bring this to the problem that Cem is looking at would be to require a physical action to enable Trustzone. Perhaps the ceremony should be that you shake your phone over an NFC pad. That’s detectable at the gyroscope level, and could bring up the authentic payments app. An app that wanted payments could send a message into a queue, and the queue gets read by the payments app when it comes up. (I’m assuming that there’s a shake that’s feasible for those with limited motion capabilities.)
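Here’s a minimal sketch of that queue idea in plain Kotlin, simulating the flow rather than using any real TrustZone or Android API; PaymentRequest, SHAKE_THRESHOLD_G, and the rest are names I made up. Untrusted apps can only enqueue a request; nothing is shown to the user until the shake ceremony brings up the trusted app, which alone drains the queue.

```kotlin
import java.util.concurrent.ConcurrentLinkedQueue
import kotlin.math.sqrt

// Ordinary (untrusted) apps may only enqueue; the trusted payments app
// is the sole reader of the queue.
data class PaymentRequest(val requestingApp: String, val amountCents: Long)

val paymentQueue = ConcurrentLinkedQueue<PaymentRequest>()

const val SHAKE_THRESHOLD_G = 2.5   // tune so the gesture stays accessible

// A deliberate shake shows up as acceleration well above one g.
fun isShake(x: Double, y: Double, z: Double): Boolean =
    sqrt(x * x + y * y + z * z) / 9.81 > SHAKE_THRESHOLD_G

// Runs only inside the trusted app, which alone owns the authentic UI.
fun onTrustedShake() {
    generateSequence { paymentQueue.poll() }.forEach { req ->
        // Rendered by the trusted UI, so a fake app cannot spoof this step.
        println("Confirm: ${req.amountCents} cents to ${req.requestingApp}")
    }
}

fun main() {
    // An untrusted app requests a payment...
    paymentQueue.add(PaymentRequest("coffee-shop-app", 450))
    // ...and nothing happens until the user performs the ceremony.
    if (isShake(x = 18.0, y = 22.0, z = 9.0)) onTrustedShake()
}
```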

There are probably other conditioned-safe ceremonies that the phone creator could create, but Cem is right: indicators by themselves (even if they pass the white-hot parts COGs gauntlet) will not be noticed. If a solution exists, it will probably involve conditioning people to do the right thing without noticing.

The Onion and Breach Disclosure

by adam on May 9, 2013

There’s an important and interesting new breach disclosure that came out yesterday. It demonstrates leadership by clearly explaining what happened and offering up lessons learned.

In particular:

  • It shows the actual phishing emails
  • It talks about how the attackers persisted their takeover by sending a fake “reset your password” email (more on this below)
  • It shows the attacker IP address (46.17.103.125)
  • It offers up lessons learned

Unfortunately, it offers up some Onion-style ironic advice like “Make sure that your users are educated, and that they are suspicious of all links that ask them to log in.” I mean, “Local man carefully checks URLs before typing passwords.” Better advice would be to keep bookmarks for the sites you need to log in to, or to use a password manager that knows what site you’re on.
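To see why the password manager is the better advice, here’s a tiny sketch, in Kotlin with made-up names, of the origin check a manager performs before filling: the credential is released only when the page’s scheme and host exactly match what was stored, so a look-alike phishing domain gets nothing, no human URL-checking required.

```kotlin
import java.net.URI

// Saved at the time the user first stored the credential.
data class Credential(val origin: String, val user: String, val password: String)

val vault = listOf(Credential("https://mail.example.com", "alice", "hunter2"))

// Only release a credential when scheme and host match exactly;
// a look-alike phishing domain simply gets nothing.
fun fillFor(pageUrl: String): Credential? {
    val page = URI(pageUrl)
    return vault.find { cred ->
        val saved = URI(cred.origin)
        saved.scheme == page.scheme && saved.host == page.host
    }
}

fun main() {
    println(fillFor("https://mail.example.com/login"))  // match: manager fills
    println(fillFor("https://mail.examp1e.com/login"))  // phish: null, no fill
}
```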

The reset-your-password email is also fascinating. (“The attacker used their access to a different, undiscovered compromised account to send a duplicate email which included a link to the phishing page disguised as a password-reset link. This dupe email was not sent to any member of the tech or IT teams, so it went undetected.”) It shows that the attackers were paying attention, and it allows us to test the idea that, ummm, local man checks URLs before typing passwords.

Of course, I shouldn’t be too harsh on them, since the disclosure was, in fact, by The Onion, who is now engaged in cyberwar with the Syrian Electronic Army. The advice they offer is of the sort that’s commonly offered up after a breach. With more breaches, we’ll see details like “they used that account to send the same email to more Onion staff at about 2:30 AM.” Do you really expect your staff to be diligently checking URLs when it’s 2:30 AM?

Whatever you think, you should read “How the Syrian Electronic Army Hacked The Onion,” and ask if your organization would do as well.

Security Lessons From Star Wars: Breach Response

by adam on May 4, 2013

To celebrate Star Wars Day, I want to talk about the central information security failure that drives Episode IV: the theft of the plans.

First, we’re talking about really persistent threats. Not like this persistence, but the “many Bothans died to bring us this information” sort of persistence. Until members of Comment Crew start going missing, we need a term more like ‘pesky’ to help us keep perspective.

Kellman Meghu has pointed out that once the breach was detected, the Empire got off to a good start on responding to it. They were discussing risk before they descended into bickering over ancient religions.

But there’s another failure: knowledge of the breach apparently never leaves that room, and there’s no organized activity to consider questions such as:

  • Can we have a red team analyze the plans for problems? This would be easy to do with a small group.
  • Should we re-analyze our threat model for this Death Star?
  • Is anyone relying on obscurity for protection? This would require letting the engineering organization know about the issue, and asking people to step forward if the theft of the plans impacts security. (Of course, we all know that the Empire is often intolerant, and there might be a need for an anonymous drop box.)

If the problem hadn’t been so tightly held, the Empire might not have gotten here:


General Bast: We’ve analyzed their attack, sir, and there is a danger. Should I have your ship standing by?

Grand Moff Tarkin: Evacuate? In our moment of triumph? I think you overestimate their chances.

There are a number of things that might have been done had the Empire known about the weakly shielded exhaust port. For example, they might have welded some steel beams across that trench. They might have put some steel plating up near the exhaust port. They might have landed a TIE Fighter in the trench. They could have deployed some stormtroopers with those tripod-mounted guns that never quite seem to hit the Millennium Falcon. Maybe it’s easier in a trench. I’m not sure.

What I am sure of is that there were all sorts of possible responses, and all of them depend on information leaving the hands of those six busy executives. Holding the information too closely magnified the effect of those Bothan spies.

So this May the Fourth, ask yourself: is there information that you could share more widely to help defend your empire?

The Breach Trilogy: Assume, Confirm, Discuss

by adam on April 22, 2013

We’ve been hearing for several years that we should assume breach. Many people have taken this to heart (although today’s DBIR says it still takes months to detect those breaches).

I’d like to propose (predict?) that breach as a central concept will move through phases. Each of these phases will go through a hype cycle, and I think of them as sort of a trilogy.

We all understand “Assume Breach,” so let’s move on to “Confirm Breach.”

Confirm Breach will be a cold place. Our heroes will be on the run from an evil empire whose probes penetrate to every corner of the network. Over-dependence on perimeter defenses will be shown to be vulnerable to big, clumsy social engineering attacks. Okay, okay, I’m working too hard for the Empire Strikes Back angle here. But really, no one wants to confirm a breach. We are running from APT, and we really do over-depend on perimeter defenses. As we get more comfortable with the fact that confirming a breach rarely hurts the breached organization very much, we’ll start to see less reticence to confirm breaches.

Recently, I was talking to someone whose organization had banned the term “breach” so they wouldn’t have to report. That sort of word game is going to raise eyebrows, and look more and more churlish and unsustainable.


Organizations and their counsel will start to realize that the broad message from Congress and the Executive Branch in the US, and from Privacy Commissioners and legislatures elsewhere, is to disclose incidents. Their willingness to contort themselves to avoid such disclosure is going to drop. First the need for such contortions, and then the professionalism of those advising them, will be called into question by other lawyers.

In the meanwhile, legislators and then legislatures will get tired of lawyers playing word games, and propose stricter and stricter laws. For example, Lexology reports that the Belgian Privacy Commissioner is asking for breach notification within 48 hours. Such a requirement risks pulling firefighters from a fire, and putting them on form-filling. And it’s a direct response to the ongoing delays in reporting breaches without a clear explanation of why it took so long.

That will lead to an era of “Discuss Breach.”

Once we get to a point where breach confirmations are routine, we can look forward to really discussing them in depth, and understanding the root cause, the controls that were in place, the detective mechanisms that worked, and the impact of the incident.

When we’re in the world of Discuss Breach, the pace at which things will get better will accelerate dramatically.

(In the future, someone will make a bad trilogy about deny breach, assume midichlorians, and we’ll all pretend it didn’t happen.)

The best part of exploit kits

by adam on April 19, 2013

Following up on my post on exploit kit statistics (no data? really folks?), I wanted to share a bit of a head-shaker for a Friday with way too much serious stuff going on.

Sometimes, researchers obscure all the information, such as in this screenshot. I have no idea who these folks think they’re protecting by destroying information like this, but what do you expect from someone whose web site requires JavaScript from 4 domains to render a basic web page? (bad HTML here).

Thinking would be welcome.