by adam on June 11, 2009
Julie Downs studied users working through an email inbox full of phishing emails while thinking aloud, with interviews afterwards. People who have experienced an incident become very sensitive to risk, but don’t get any better at identifying phishing emails. What really helps is contextualized understanding. Do they know what a URL is? Do they know how a scam works? She is working on new educational programs to help people avoid phishing.
Jean Camp, People Taking Risks
Covering three experiments with two big points. She wants to make security “not usable but used.” People who recycle still have computers that are part of botnets. Four steps: create mental models from internal narratives about risks; mitigate relevant risks; contextualize risk as relevant; use narrative to increase the desire and ability to use tools. A student of hers did a study: show a video about a strange landlord with a nice apartment and a barren one. You can rent the cheap one and access the nice one on demand. Participants then created accounts, and 6 of 6 created limited user accounts of various types.
She showed videos of PGP installation problems, which are pretty funny, and encouraged Jon Callas to get some coffee while she played them.
Lesson: No communication is welcome if badly timed. You have to tell stories.
Matt Blaze, Kentucky Voting Fraud
Has been involved in the analysis of electronic voting machines. “In every case the reviews confirmed the worst fears.” “Buffer overflows in every interface reading from an untrusted source.” “The nicest thing we could say is there didn’t seem to be any way to injure the voter.” In March, all the voting officials in Clay County, Kentucky were indicted. The exploit was not one of the ones the review teams found, and it was not a very high-tech attack: they altered the instructions to not include the “confirm” button press which was needed to commit the vote. They did it in the first election after they got the machines. Electronic vote rigging in Kentucky.
Jeff Friedberg, Trust User Experience

Created a “big picture” of security titled the “internet battlefield.” Talks about the variety of attacks, and how attacks are converging on the user and their perception. This leads to thinking about the “trust user experience” (TUX): the times when users have to make trust decisions. Three key components: UI, underlying architecture, and mental models. Mistakes result in a range of harm, for both consumers and enterprises. More data is online, and the user is placed in the center. He thinks of it as the ‘first two feet of trust.’
Stuart Schechter, Secret Questions

Security of secret questions. I blogged this here. Short form: all secret question systems have serious shortcomings, and users can’t remember all the failure modes. Letting users write their own questions has its own problems: “What’s the sports team you love to hate?” “What’s my favorite kink?” He will be presenting further work at SOUPS this summer.
Tyler Moore, Exploiting Human Nature in the Design of Attacker Infrastructure
The standard view rests on either pessimistic assumptions or colorful anecdotes; Tyler is working from empirical observation. (Cool!) Attackers are effective in exploiting the human failings of IT professionals. Attackers need to trick users, and need infrastructure to carry out attacks; human failings make the attacker’s job of hosting that infrastructure easier. Defenders fail to cooperate: security (“takedown”) firms won’t share lists of phishing websites, firms don’t help victims understand the hack, and vulnerable websites are compromised over and over again. The real impersonated (non-bank) companies don’t care about impersonation.
Angela Sasse asks Jeff about his use of the word trust; the definition includes the truster knowing about their vulnerability. Attackers are exploiting reliance. Jeff responds that there are a lot of subtleties, including those Angela mentions, and habituation. Angela: when designing, think about how to exploit routine behavior.
Chris Soghoian asks Julie about email. Missed the answer.
Andrew Adams asks about the mutual authentication experience, or lack thereof. What should we do? Jeff mentions economics, especially friction. Stuart adds that “unless everyone trains the user properly,” it makes sense: a really well-written email will win. Jean adds that she doesn’t want a relationship with her bank or software provider; it would mean she’s investing in it. She sees me frown, and I comment that, as a taxpayer, she is investing in her bank. She suggests that “at a bank” is one bit of information that would be useful. Luke (?) did research on what concepts people describe correctly: simple concepts get described better.
Lorrie Cranor comments to Matt about all the smart computer scientists doing the research: were UI experts or voting experts involved? Matt comments that it’s hard to know if you’re being holistic enough. Peter Neumann adds that if you look at the full set of reports from California, most showed that the systems were not consistent with their documentation. Conclusion: even with all the analysis tools, we can’t understand all the flaws, and we can’t protect these systems with 70-year-old poll workers. We need end-to-end reliability; the idea of a perfect solution here is ridiculous. Matt says the most interesting bit is how quickly the attackers found the problem. Jean Camp says you have to over-invest in voting security.
[Update: Bruce Schneier’s session 2 notes.]