by Chandler on November 19, 2009
Cormac Herley at Microsoft Research has done us all a favor and released a paper So Long, And No Thanks for the Externalities: The Rational Rejection of Security Advice by Users which opens its abstract with:
It is often suggested that users are hopelessly lazy and unmotivated on security questions. They chose weak passwords, ignore security warnings, and are oblivious to certificate errors. We argue that users’ rejection of the security advice they receive is entirely rational from an economic perspective.
And you know it’s going to be good when they write:
Thus we find that most security advice simply offers a poor cost-benefit tradeoff to users and is rejected. Security advice is a daily burden, applied to the whole population, while an upper bound on the benefit is the harm suffered by the fraction that become victims annually. When that fraction is small, designing security advice that is beneficial is very hard. For example, it makes little sense to burden all users with a daily task to spare 0.01% of them a modest annual pain.
People are not stupid. They make what we, as relative experts on the topic of security, perceive to be bad decisions, but this paper argues that their behavior is rational.
[W]e argue for a third view, which is that users’ rejection of the security advice they receive is entirely rational from an economic viewpoint. The advice offers to shield them from the direct costs of attacks, but burdens them with increased indirect costs, or externalities. Since the direct costs are generally small relative to the indirect ones they reject this bargain. Since victimization is rare, and imposes a one-time cost, while security advice applies to everyone and is an ongoing cost, the burden ends up being larger than that caused by the ill it addresses.
The paper provides both a good and accessible overview of externalities and rational behavior using spam as an example.
For example, Kanich et al. document a campaign of 350 million spam messages sent for $2,731 worth of sales made. If 1% of the spam made it into in-boxes, and each message in an inbox absorbed 2 seconds of the recipient’s time, this represents 1944 hours of user time wasted, or $28,188 at twice the US minimum wage of $7.25 per hour.
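The paper’s arithmetic is easy to sanity-check. Here is a quick reproduction of the Kanich et al. estimate, using the paper’s own assumptions (1% delivery rate, 2 seconds per message, twice the 2009 minimum wage):

```python
# Reproduce the spam-externality estimate cited from Kanich et al.
messages_sent = 350_000_000   # spam messages in the campaign
delivery_rate = 0.01          # fraction assumed to reach an inbox
seconds_per_message = 2       # recipient time to see and delete
wage_per_hour = 2 * 7.25      # twice the 2009 US minimum wage of $7.25

hours_wasted = messages_sent * delivery_rate * seconds_per_message / 3600
cost = hours_wasted * wage_per_hour

print(f"{hours_wasted:.0f} hours wasted, ${cost:,.0f} of recipient time")
# The paper quotes $28,188 because it rounds to 1944 hours before
# multiplying: 1944 * 14.50 = 28,188. Either way: five figures of user
# time burned to generate $2,731 of sales.
```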
Coincidentally, we get a little over 300 million spam messages into our corporate email gateways every month, which means I can compare the cost-per-delete-click against the cost of our corporate spam filtering contract without doing much math: 300 million messages at 2 seconds each is roughly 167,000 hours per month, or over $1.2 million even at the $7.25 minimum wage. Since we pay about $50,000/month for filtering, and our white-collar employees actually cost over $14/hour, we’re getting a pretty good deal.
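The same comparison can be sketched for the corporate numbers above. The 2-seconds-per-message figure is carried over from the paper; the message volume and contract cost are the round numbers quoted in this post:

```python
# Compare the monthly spam-filtering contract against the hypothetical
# cost of employees deleting the same spam by hand (2 s/message, per
# the paper's assumption).
messages_per_month = 300_000_000
seconds_per_message = 2
filtering_cost = 50_000            # $/month for the filtering contract

deletion_hours = messages_per_month * seconds_per_message / 3600
for label, wage in [("minimum wage", 7.25), ("white-collar", 14.00)]:
    manual_cost = deletion_hours * wage
    print(f"{label}: ${manual_cost:,.0f}/month of deletion time, "
          f"{manual_cost / filtering_cost:.0f}x the filtering contract")
```

Even at minimum wage the hand-deletion cost is more than twenty times the filtering bill, which is why the author can skip the “real math.”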
That’s just the time spent seeing and deleting the message, mind you. Fourteen dollars per hour completely ignores the cost of attention disruption (much more than two seconds) and the Direct Losses, which I either cannot quantify (which makes the entire argument look specious in the eyes of Senior Leadership) or am not at liberty to disclose in enough detail to pass the “cannot quantify” test.
They then go on to document, in fairly accessible models, why password complexity rules, anti-phishing awareness, and SSL errors are cost-inefficient, and get into a favorite topic of mine: the difficulty of defining security losses, or the benefit from adding safeguards, at the end-user level. This section should be mandatory reading for any security person who attempts to talk to non-security people about the topic; i.e., all of us.
What’s missing from the paper, though, is the next logical step of analysis, the appropriate Risk Management strategy in response to the information presented. Hopefully that will be the follow-on paper, because as it was, it felt like a bit of a cliff-hanger to me. All of the discussion assumes that mitigation is the only option. This may feel right from a Security perspective, but it’s probably not the correct risk management decision.
To manage the risk in these cases, though, I see a strong argument for risk transfer. High-impact, low-likelihood events are best managed by aggregating the risk into a pool and spreading the cost across the pool; i.e., buying insurance against these losses. If you could buy anti-phishing insurance for $1/person/year (which, realistically, is multiples of what it would cost if 200 million people all bought in), rather than throwing large, uncoordinated piles of money at ineffective awareness training or technical countermeasures that will probably be out-innovated by the attackers within hours or days, why wouldn’t you?
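A toy expected-loss calculation shows why the premium math could work. The 200 million buyers come from the sentence above, and the 0.01% annual victimization rate comes from the paper’s example; the $1,000 average Direct Loss per victim is purely my own illustrative assumption:

```python
# Toy risk-pooling calculation: does a $1/person/year premium cover
# expected phishing payouts? (loss_per_victim is an assumed figure,
# not from the paper or the post.)
pool_size = 200_000_000       # people buying in (from the post)
victim_rate = 0.0001          # 0.01% victimized annually (paper's example)
loss_per_victim = 1_000       # assumed average Direct Loss, illustrative
premium = 1.00                # $ per person per year

expected_payouts = pool_size * victim_rate * loss_per_victim
premium_income = pool_size * premium
print(f"expected payouts ${expected_payouts:,.0f} vs premium income "
      f"${premium_income:,.0f} (loss ratio "
      f"{expected_payouts / premium_income:.0%})")
```

Under those assumptions the pool pays out only a tenth of what it collects, which is the sense in which $1/person/year is “multiples” of the actuarially fair price.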
Why have anti-virus vendors not thought of this? If your AV vendor said they would also insure you against Direct Losses (having your bank account cleaned out) for your $50/year subscription, would that differentiate them enough to win your business?
By all means, we should continue to work on the challenges of improving the security experience and reducing the risk of using computers. More accurately, though, we should be reducing the amount that must be experienced by users at all to improve security of their information and transactions.