by adam on May 9, 2013
There’s an important and interesting new breach disclosure that came out yesterday. It demonstrates leadership by clearly explaining what happened and offering up lessons learned.
- It shows the actual phishing emails
- It talks about how the attackers persisted their takeover by sending a fake “reset your password” email (more on this below)
- It shows the attacker IP address (220.127.116.11)
- It offers up lessons learned
Unfortunately, it offers up some Onion-style ironic advice like “Make sure that your users are educated, and that they are suspicious of all links that ask them to log in.” I mean, “Local man carefully checks URLs before typing passwords.” Better advice would be to have bookmarks for the sites you need to log in to, or to use a password manager that knows what site you’re on.
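That last point is worth making concrete. A password manager resists phishing because it releases a credential only when the page’s hostname matches the origin recorded when the password was saved; a lookalike domain gets nothing. Here’s a minimal sketch of the idea (the credential store and names are invented for illustration, not any real password manager’s API):

```python
from urllib.parse import urlparse

# Hypothetical saved-credential store: the hostname is recorded when the
# password is first saved. (Illustrative data, not a real vault format.)
SAVED_ORIGINS = {
    "example-webmail.com": "hunter2",
}

def credential_for(url: str):
    """Return the saved password only if the page's hostname exactly
    matches a saved origin -- a lookalike phishing domain gets nothing."""
    host = urlparse(url).hostname or ""
    return SAVED_ORIGINS.get(host)

# The real site matches; the phishing lookalike does not, no matter how
# convincing it looks to a human at 2:30 AM.
real = credential_for("https://example-webmail.com/login")
fake = credential_for("https://example-webmail.com.evil.example/login")
```

The point is that the exact-match check happens in software every time, which is exactly what “carefully check the URL” asks a tired human to do.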
The reset your password email is also fascinating. (“The attacker used their access to a different, undiscovered compromised account to send a duplicate email which included a link to the phishing page disguised as a password-reset link. This dupe email was not sent to any member of the tech or IT teams, so it went undetected.”) It shows that the attackers were paying attention, and it allows us to test the idea that, ummm, local man checks URLs before typing passwords.
Of course, I shouldn’t be too harsh on them, since the disclosure was, in fact, by The Onion, which is now engaged in cyberwar with the Syrian Electronic Army. The advice they offer is of the sort that’s commonly offered up after a breach. With more breaches, we’ll see details like “they used that account to send the same email to more Onion staff at about 2:30 AM.” Do you really expect your staff to be diligently checking URLs when it’s 2:30 AM?
Whatever you think, you should read “How the Syrian Electronic Army Hacked The Onion,” and ask if your organization would do as well.
by adam on May 4, 2013
To celebrate Star Wars Day, I want to talk about the central information security failure that drives Episode IV: the theft of the plans.
First, we’re talking about really persistent threats. Not like this persistence, but the “many Bothans died to bring us this information” sort of persistence. Until members of the Comment Crew start going missing, we need a term more like ‘pesky’ to help us keep perspective.
Kellman Meghu has pointed out that once the breach was detected, the Empire got off to a good start on responding to it. They were discussing risk before they descended into bickering over ancient religions.
But there’s another failure: knowledge of the breach apparently never leaves that room, and there’s no organized activity to consider questions such as:
- Can we have a red team analyze the plans for problems? This would be easy to do with a small group.
- Should we re-analyze our threat model for this Death Star?
- Is anyone relying on obscurity for protection? This would require letting the engineering organization know about the issue, and asking people to step forward if the plans being stolen impacts security. (Of course, we all know that the Empire is often intolerant, and there might be a need for an anonymous drop box.)
If the problem hadn’t been so tightly held, the Empire might not have gotten here:
General Bast: We’ve analyzed their attack, sir, and there is a danger. Should I have your ship standing by?
Grand Moff Tarkin: Evacuate? In our moment of triumph? I think you overestimate their chances.
There are a number of things that might have been done had the Empire known about the weakly shielded exhaust port. For example, they might have welded some steel beams across that trench. They might put some steel plating up near the exhaust port. They might land a TIE Fighter in the trench. They could deploy some storm troopers with those tripod mounted guns that never quite seem to hit the Millennium Falcon. Maybe it’s easier in a trench. I’m not sure.
What I am sure of is there’s all sorts of responses, and all of them depend on information leaving the hands of those six busy executives. The information being held too closely magnified the effect of those Bothan spies.
So this May the Fourth, ask yourself: is there information that you could share more widely to help defend your empire?
by adam on April 22, 2013
We’ve been hearing for several years that we should assume breach. Many people have taken this to heart (although today’s DBIR says it still takes months to detect those breaches).
I’d like to propose (predict?) that breach as a central concept will move through phases. Each of these phases will go through a hype cycle, and I think of them as sort of a trilogy.
We all understand “Assume Breach,” so let’s move on to “Confirm Breach.”
Confirm Breach will be a cold place. Our heroes will be on the run from an evil empire whose probes penetrate to every corner of the network. Over-dependence on perimeter defenses will be shown to be vulnerable to big, clumsy social engineering attacks. Okay, okay, I’m working too hard for the Empire Strikes Back angle here. But really, no one wants to confirm a breach. We are running from APT, and we really do over-depend on perimeter defenses. As we get more comfortable with the fact that confirming a breach rarely hurts the breached organization very much, we’ll start to see less reticence to confirm breaches.
Recently, I was talking to someone whose organization had banned the term “breach” so they wouldn’t have to report. That stance is going to raise eyebrows and look more and more churlish and unsustainable.
Organizations and their counsel will start to realize that the broad message from Congress and the Executive Branch in the US, and Privacy Commissioners and Legislatures elsewhere, is to disclose incidents. Their willingness to contort themselves to avoid such disclosure is going to drop. First the need to do so, and then the professionalism of those offering such advice, will be called into question by other lawyers.
In the meanwhile, legislators and then legislatures will get tired of lawyers playing word games, and propose stricter and stricter laws. For example, Lexology reports that the Belgian Privacy Commissioner is asking for breach notification within 48 hours. Such a requirement risks pulling firefighters from a fire, and putting them on form-filling. And it’s a direct response to the ongoing delays in reporting breaches without a clear explanation of why it took so long.
That will lead to an era of “Discuss Breach.”
Once we get to a point where breach confirmations are routine, we can look forward to really discussing them in depth, and understand the root cause, the controls that were in place, the detective mechanisms that worked, and the impact of the incident.
When we’re in the world of Discuss Breach, the pace at which things will get better will accelerate dramatically.
(In the future, someone will make a bad trilogy about deny breach, assume midichlorians, and we’ll all pretend it didn’t happen.)
by adam on April 19, 2013
Following up on my post on exploit kit statistics (no data? really folks?), I wanted to share a bit of a head-shaker for a Friday with way too much serious stuff going on.
Thinking would be welcome.
by adam on April 11, 2013
On a fairly regular basis, I come across pages like this one from SANS, which contain fascinating information taken from exploit kit control panels:
There’s all sorts of interesting numbers in that picture. For example, the success rate for owning XP machines (19.61%) is three times that of Windows 7. (As an aside, the XP number is perhaps lower than “common wisdom” in the security community would have it.) There are also numbers for the success rates of exploits, ranging from Java OBE at 35% down to MDAC at 1.85%.
I’m fascinated by these numbers, and have two questions:
- Is anyone capturing the statistics shown and analyzing them over time?
- Is there an aggregation of all these captures? If not, what are the best search terms to find them?
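To make the first question concrete, here’s a rough sketch of what such tracking could look like: snapshots transcribed from panel screenshots, keyed by date and OS, aggregated into a success-rate series. All the numbers below are invented except the 19.61% XP rate from the SANS screenshot:

```python
from collections import defaultdict

# Illustrative snapshots transcribed from exploit-kit control panels.
# Only the 19.61% XP figure comes from the SANS screenshot; the rest
# of these counts are made up for the sketch.
snapshots = [
    {"date": "2013-04", "os": "Windows XP", "hosts": 10000, "loads": 1961},
    {"date": "2013-04", "os": "Windows 7",  "hosts": 10000, "loads": 650},
    {"date": "2013-05", "os": "Windows XP", "hosts": 8000,  "loads": 1500},
]

def success_rates(snaps):
    """Aggregate loads/hosts per (date, os) into a success-rate (%) series."""
    totals = defaultdict(lambda: [0, 0])
    for s in snaps:
        t = totals[(s["date"], s["os"])]
        t[0] += s["loads"]
        t[1] += s["hosts"]
    return {k: round(100 * loads / hosts, 2)
            for k, (loads, hosts) in totals.items()}

rates = success_rates(snapshots)
# rates[("2013-04", "Windows XP")] is 19.61, matching the screenshot.
```

Even this crude a pipeline, fed from a stream of captured panel screenshots, would let us see whether those success rates trend down as patches roll out.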
by adam on April 10, 2013
Thanks to Addison Wesley, who are offering 40% off the book. Apply code NEWSCHOOL40 to get your discounted copy. (You apply the code after proceeding to checkout.)
by Russell on April 9, 2013
As it happens, both the US Government and the UK government are leading “cyber security standards framework” initiatives right now. The US is using a consensus process to “incorporate existing consensus-based standards to the fullest extent possible”, including “cybersecurity standards, guidelines, frameworks, and best practices” and “conformity assessment programs”. In contrast, the UK is asking for evidence that any proposed standard or practice is beneficial or even “best”.
The Brits are doing it right. I hope the US follows their lead.
by adam on April 8, 2013
Five years ago Friday was the official publication date of The New School of Information Security. I want to take this opportunity to look back a little and look forward to the next few years.
Five years ago, fear of a breach and its consequences was nearly universal, and few people thought anything but pain would come of talking about our problems. Many people found it shocking when we challenged best practices, or asked if there was evidence for the ways we invested in security. I’d like to think we played some small role in how the culture of information security has changed. I’m hopeful that culture will continue to evolve in ways that focus on outcomes and data about those outcomes. At the same time, as I reflect, I go back to what Andrew and I wrote.
We wrote that the New School of Information Security is:
- Learning from other professions, such as economics and psychology, to unlock the problems that stymie the security field. The way forward cannot be found solely in mathematics or technology.
- Sharing objective data and analysis widely. A fetish for secrecy has held us back.
- The embrace of the scientific method for solving important security problems. Analyzing real-world outcomes is the best way for information security to become a mature discipline.
We’ve seen tremendous movement in the sharing of objective data. From the DBIR to Mandiant’s report to revelations from Google, RSA, Bit9 and many others, we see people willing to talk about what went wrong. Sure, they sometimes add some spin, but that’s human nature. We’re seeing data being shared, or as I now like to say, published. We can’t take credit for that. Lots of people did a lot of hard work to convince their organizations to publish that data, and we’re learning from it and collections like the Open Security Foundation’s dataset.
We’ve also heard from countless folks about how much they liked the book, how it’s influenced their thinking and their actions, and that’s been a wonderful return on our work.
What we haven’t seen as much of is learning from other professions, such as economics and psychology. It’s still too common to complain that people will click on anything, and we still argue, with a paucity of data, about whether training people makes any sense. (Although if you have any data, I’d love to get it some attention at BlackHat.)
We also haven’t yet seen a lot of published data on the effectiveness of various security investments. As far as I know, no compliance regime yet requires breached entities to report back to those who create the standard about what went wrong, perpetuating the wicked environment in which we work, and wasting the time and money of those who need to comply.
Sadly, the previous two paragraphs relate to what we wrote in chapters 5 and 6. For those of you who enjoyed the book, let me ask you to re-read them. For those of you who haven’t yet read it, now’s a great time. [Update: Even better, Addison Wesley is offering 40% off with code NEWSCHOOL40 to help us celebrate! Apply the code after proceeding to checkout.]
Andrew and I remain optimistic that our world can get better, and we’re proud to have helped illuminate a path forward.
by adam on April 3, 2013
According to Wired, “Army Practices Poor Data Hygiene on Its New Smartphones, Tablets.” And I think that’s awesome. No, really, not the ironic sort of awesome, but the awesome sort of awesome, because what the Army is doing is a large scale natural experiment in “does it matter?”
Over the next n months, the Pentagon’s IG can compare incidents in the Army to those in the Navy and the Air Force, and see who’s doing better and who’s doing worse. In theory, the branches of the military should all be otherwise roughly equivalent in security practice and culture (compared to, say, Twitter’s corporate culture, or that of Goldman Sachs).
With that data, they can assess if the compliance standards for smartphones make a difference, and what difference they make.
So I’d like to call on the Army to not remediate any of the findings for 30 or 60 days. I’d like to call on the Pentagon IG to analyze incidents in a comparative way, and let us know what he finds.
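The comparative analysis needn’t be fancy. Even a simple two-proportion test would tell us whether a difference in incident rates between branches is bigger than chance. A sketch, with invented counts (no real incident figures are public here):

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Z statistic comparing two incident proportions, e.g. the share of
    Army vs. Navy devices with a reported incident. Standard pooled test."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                 # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical: 40 incidents across 10,000 Army devices vs. 25 across
# 10,000 Navy devices. |z| > 1.96 would suggest a real difference at
# the usual 95% level.
z = two_proportion_z(40, 10000, 25, 10000)
```

With the IG’s actual numbers plugged in, this is the whole calculation needed to say whether the unremediated controls made a measurable difference.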
Update: I wanted to read the report, which, as it turns out, has been taken offline. (See Consolidated Listing of Reports, which says “Report Number DODIG-2013-060, Improvements Needed With Tracking and Configuring Army Commercial Mobile Devices, issued March 26, 2013, has been temporarily removed from this website pending further review of management comments.”)
However, based on the Wired article, this is not a report about breaches or bad outcomes, it’s a story about controls and control objectives.
Spending time or money on those controls may or may not make sense. Without information about the outcomes experienced without those controls, the efficacy of the controls is a matter of opinion and conjecture.
Further, spending time or money on those controls is at odds with other things. For example, the Army might choose to spend 30 minutes training every soldier to password lock their device, or they could spend that 30 minutes on additional first aid training, or Pashtun language, or some other skill that they might, for whatever reason, want soldiers to have.
It’s well past time to stop focusing on controls for the sake of controls, and start testing our ideas. No organization can afford to implement every idea. The Army, the Pentagon IG and other agencies may have a perfect opportunity to test these controls. To not do so would be tragic.
by adam on April 1, 2013
Hacking humans is an important step in today’s exploitation chains. From “2011 Recruitment plan.xls” to instant messenger URL delivery at the start of Aurora, the human in the loop is being exploited just as much as the machine. In fact, with the right story, you might not even need an exploit at all.
So I’m looking to be able to put together an awesome track on hacking humans for Black Hat USA 2013. I’d love work on things like:
- Unusable security or privacy (preferably with user studies)
- The cognitive science of attention
- Conditioned-safe ceremonies
- Measuring the effects of security awareness training
- Human compliance budgets
- Threat modeling techniques for user interfaces
- What any of these attacks might teach defenders about user interface design.
- Engineering for real world human error
- New ceremony analytic techniques
- New frameworks for thinking about hacking humans, or defending against attacks on people
- This list is incomplete
At Black Hat we like talks about hacking stuff. We like technical talks. We don’t like pure theory, without demonstrated application. We don’t have talks about getting a UPS uniform.
If you have such content, I encourage you to check out the Black Hat Call for Papers and consider submitting by April 15th.