by adam on June 11, 2009
Caspar Bowden chaired session 3, on usability.
Andrew Patrick, NRC Canada (until Tuesday), spoke about there being two users of biometric systems: the purchaser or system operator, and the subject. Argues that biometrics are being rolled out without much thought about why they’re being used, when they make sense, and when they don’t. Canada has announced that it will soon be fingerprinting all visitors, and is experimenting with remote (6-10 feet) iris scanners. NIST has data from US-VISIT on fingerprints & photos. Both are of poor quality, the fingerprints especially so for older people. Mentions the paper “The Perception of Biometric Technology,” which shows acceptance is correlated with purpose. Fingerprints are left all over. Andrew asserts we should publish our fingerprints. (Adam adds: we are.) Relates biometrics to nuclear waste in the lack of public discussion. Papers: Fingerprint Concerns: Performance, Usability, and Acceptance of Fingerprint Biometric Systems
Luke Church, Cambridge, discussed HCI and security. Transformation from HCI to User Centered Design. Values: Concretization: talk about personas and scenarios. Direct manipulation of objects. Economy of usability: make the common easy. SER cycle. Zittrain (not present) argues in “The Future of the Internet” that security decisions are being embedded in tech, driven by a desire for less interference. Alma Whitten’s abstraction and feedback. Original goal: control over information and technology. Discusses control by appliance, and how it gives too much power to the technologist (DRM, perfect enforcement, norming). We need tools for expressing meaningful end user control. Shows a Facebook privacy dialog: it looks simple, but says there are 86 of them. Seek ways to leave the “last mile” of design to users. (Adam adds: it’s the first mile, damnit. Put the user at the center.) Papers: The User Experience of Computer Security, Usability and the Common Criteria
Diana Smetters, PARC, talked about meeting the user halfway. You can teach users, you just can’t teach them very much. Requires careful design of what you teach them. Users want to be secure, and need to get their job done. Ask questions like: what should the model be for user accounts in the home? (Not timeshare/business/mainframe). How should web sites authenticate themselves? How do we make it safe to click links in emails? Why is this hard? You have to give up on what you think would be good for them… everybody else is conspiring against you. Discussed phishing as a “mismatch problem.” Built a set of protected bookmarks and delivers single-site browser instances. What should a user do when they see the browser give a cert warning? Ignore it. Most certs on the internet are broken. (Didn’t catch the citation.) People are rational: if attacks are rare, people will ignore them. With anti-virus, we increased the number of attacks until people ran anti-virus. Claims that it might be a good idea to throw out the baby with the bathwater. Some papers:
Breaking out of the browser to defend against phishing attacks, Building secure mashups, Ad-hoc guesting: when exceptions are the rule
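The “protected bookmarks” idea above can be sketched as a strict whitelist check: navigation is allowed only to pre-approved origins, so there is nothing to warn about and nothing for the user to ignore. This is an illustrative sketch, not PARC’s actual design; the names and hosts here are made up.

```python
# Minimal sketch of whitelist-only navigation (hypothetical, not PARC's code).
from urllib.parse import urlsplit

# Hosts the user (or their whitelist provider) has pre-approved.
PROTECTED_BOOKMARKS = {
    "bank.example.com",
    "mail.example.org",
}

def allowed(url: str) -> bool:
    """Allow navigation only when the URL's host exactly matches a bookmark."""
    host = urlsplit(url).hostname or ""
    return host in PROTECTED_BOOKMARKS

# Exact-match lookup defeats lookalike hosts, a common phishing trick.
assert allowed("https://bank.example.com/login")
assert not allowed("https://bank.example.com.evil.test/login")
```

Note the design choice: because the decision is block-or-allow with no middle ground, the user never sees a dialog they could become habituated to.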
Rob Reeder, Microsoft, talked about “social recovery” for lost passwords. Users pre-designate trustees who can provide for recovery. Can use arbitrary k/n groups of trustees. Ideally, the trustee talks to the person who lost the password. Showed slides of increasingly trustworthy reasons to think it’s the right person, plus various attacks and defenses. Good recovery, good resistance to email attacks. Medium resistance to highly-personalized attacks (spouses, etc). Paper: It’s Not What You Know, but Who You Know: A Social Approach to Last-Resort Authentication.
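The k-of-n trustee scheme can be sketched as a simple threshold check: a password reset proceeds only once at least k distinct pre-designated trustees have vouched for the requester. This is an illustration of the idea, not Reeder’s implementation.

```python
# Hypothetical k-of-n social recovery check (illustrative only).
def recovery_approved(trustees: set, vouchers: set, k: int) -> bool:
    """True once at least k distinct designated trustees have vouched."""
    valid = vouchers & trustees  # ignore vouches from anyone not pre-designated
    return len(valid) >= k

trustees = {"alice", "bob", "carol", "dave"}

# Three real trustees vouch: recovery succeeds.
assert recovery_approved(trustees, {"alice", "bob", "carol"}, k=3)
# An attacker with one compromised trustee and fake accounts fails.
assert not recovery_approved(trustees, {"alice", "mallory", "eve"}, k=3)
```

The k/n threshold is the usability-security dial: higher k resists a single compromised trustee (or a spouse), lower k makes recovery easier.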
Lorrie Cranor, CMU, talked about warnings. Best thing to do: fix the problem, then guard, then warn. Too often we warn the user, it turns out not to be a problem, and people get habituated. Has a great dialog: “you need to click ok to get on doing work.” Need “A Framework for Reasoning About the Human in the Loop.” Suggests blocking real dangers, allowing low dangers, and requiring the user to decide only when there’s a middle level. Did a study with real and new browser warnings. Asked “what type of site are you trying to reach?” Firefox 3 warnings: only half were able to figure out how to override the warnings. Conclusion: warnings are only so effective. Need to think beyond warnings. Papers: Timing Is Everything? The Effects of Timing and Placement of Online Privacy Indicators; School of Phish: A Real-World Evaluation of Anti-Phishing Training; You’ve Been Warned: An Empirical Study of the Effectiveness of Web Browser Phishing Warnings
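The block/allow/ask triage Cranor suggests can be sketched as a three-way policy: block clear dangers outright, silently allow clearly-safe actions, and interrupt the user only in the middle band where a genuine decision is needed. The risk scale and thresholds below are made up for illustration.

```python
# Hypothetical sketch of tiered warning triage (thresholds are illustrative).
def triage(risk: float) -> str:
    """Map an estimated risk in [0, 1] to a policy decision."""
    HIGH, LOW = 0.8, 0.2  # assumed cutoffs, not from any real browser
    if risk >= HIGH:
        return "block"     # clear danger: don't ask, just stop it
    if risk <= LOW:
        return "allow"     # clearly safe: no dialog, no habituation
    return "ask_user"      # middle ground: the one case worth a warning

assert triage(0.9) == "block"
assert triage(0.05) == "allow"
assert triage(0.5) == "ask_user"
```

Keeping the “ask” band narrow is the point: users who rarely see warnings are more likely to take them seriously when they do.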
Jon Callas agreed to be the pessimist. (The optimist thinks this is the best of all possible worlds; the pessimist is afraid she’s right.) Talks about cliffs we need to scale versus ramps that we can walk up. Maybe dictates are ok. Can users really make effective decisions? Do they want to? Discusses his uncle, who was an aerospace engineer but is now in his 80s and doesn’t grok computers. Risks of unencrypted email are small, hard to describe or estimate. Seatbelts as metaphors: easy to visualize. People die for not wearing seatbelts. Left to themselves, people wear seatbelts at a rate of 17-20%. So if people don’t wear seatbelts, why would they encrypt their email? Agree: you can teach them, but you can’t teach them much. Coming to the opinion that we’re going to have to start making decisions for others, because it doesn’t work otherwise.
Improving Message Security With a Self-Assembling PKI
Questions: Markus Jakobsson asked Rob if people would use themselves as trustees with an alternate email account. Yes, but that’s ok. People should have a range of account recovery options.
Peter Neumann commented that the literal interpretation of Descartes is the better is the enemy of the good, and that by inversion the very bad is the enemy of the worst. Asked what’s good enough. Jon Callas says we want to move up from the bottom of the cliff. Luke asserts that the number of people who get hurt is small enough that it makes the newspapers. Chris Soghoian says different sites have different motivations, cites Ross who says Facebook wants to dump hard questions on users so they can blame them. Luke takes issue, says not clear that Facebook knows what to do, or how to do it. Usability is hard.
John Muller (sp?) says it’s ok to read his email, it’s boring. Jon Callas responds that people often don’t know what data is on their systems, but that it may be ok to have insecure emails.
Jean Camp says it’s fascinating in 2009 to hear that the happy libertarian market pony is going to solve the problem. Addresses John Muller & Luke. Says “I don’t know how bad it is.” Only just now starting to look at security as a macro problem. Luke responds: agreed, we don’t know how bad the problem is. Worries about trading away future uses of technology when we don’t know how bad the problem is.
(Didn’t catch questioner name) whitelist question: how long until RIAA imposes whitelists? How long until banks restrict market entry? Diana responds that you can choose your own whitelist provider. Jon Callas adds that 40% of malware is signed code. EV signed phishing sites are starting to crop up. If our education really worked, we’d have a catastrophic failure.
Dave Clark says he’s thinking about the topic which is supposed to be usability. Maybe people are happy without their seatbelts. Says the conversation was half about usability, and asks “do we have a taxonomy of what we’re talking about.” Diana Smetters responds (missed it).
Allan Friedman comments that general whitelists are hard. I ask why we should expect whitelists to work. Ross suggests that we create physical devices or ceremonies. (When you put your phone within a foot of your computer, you can only go to a whitelisted site.) (Missed some questions.)
Jeff Friedberg asks Lorrie about guidance given to test subjects–do you tell them what to do when you contact a site? Lorrie’s experiment included an alternate route, which involved making a phone call.
Joseph Bonneau argues that social networking sites can’t use privacy as a selling point. Says that sites don’t want users to be confused, but do want to steer people towards openness. No incentives to do the wrong thing, but nudges away from bothering with privacy. Luke responds that there’s an ongoing social negotiation over use of these sites.
Jeff Friedberg asks for more on trusted advisors: what’s worked, what hasn’t, and what research is ongoing?