I have no idea how this paper came about, but I'm glad it did. The first author is a computer-science professor at Cambridge University specializing in privacy and systems security. The second author is a professional scam artist and stage magician who demonstrates real-world scams on unsuspecting victims as part of a BBC television show. Together, they fight crime!
Well, yes, actually. They do... in a very academic sort of way. The whole purpose of that BBC show (aside from entertaining the audience and selling advertising) is to show how these scams actually work so that people can avoid them in the future. This 'paper' (actually, an unrefereed technical report) attempts to distill them down to some general principles in the hopes that computer-security systems will start taking them into account. But first, it describes a number of scams as they were perpetrated on the TV show. My favorites:
2.5.1 Jewellery shop scam (S1-E1)
...Jess attempts to buy an expensive necklace but is then “arrested” by Alex and Paul who expose her as a well-known fraudster, notorious for paying with counterfeit cash. The “cops” take the “fraudster” to the police station and collect in an evidence bag the “counterfeit” (actually genuine) cash and, crucially, the necklace, which of course “will be returned”. The jeweller is extremely grateful that the cops saved her from the evil fraudster. As Jess is taken away in handcuffs, the upset jeweller spits out a venomous “Bitch! You could have cost me my job, you know that?”.
(In case you are wondering, the show claims to return all the scammed money and merchandise to the victims.) From their description of Three Card Monte:
What this so-called “game” really is, instead, is something quite different, namely a cleverly structured piece of street theatre designed to attract passers-by and hook them into the action. The sleight-of-hand element is actually the least important since it is the way the marks are manipulated, rather than the props, that brings in the money. It’s all about the crowd of onlookers and players (all shills) betting in a frenzy and irresistibly sucking the mark into wanting a part of the action. One shill makes a quick and tidy profit by betting and winning, “proving” to the mark that the game is not rigged. Another shill loses money on an “easy” shuffle where the mark clearly sees that he, instead, had guessed correctly, which makes him feel “more clever” than that other player. In S1-E1, one shill (Jess, the cute girl) even gives the mark some money and asks him to bet (and win) on her behalf. Once the mark starts to bet (and of course lose) his own money, the shills find ways to keep him going for more...
The paper has lots more. I'm particularly fond of "Van Dragging". (Go read it.)
From these descriptions, the paper derives seven principles. Among them:
3.2 The Social Compliance principle
Society trains people not to question authority. Hustlers exploit this “suspension of suspiciousness” to make you do what they want.
(See the Jewellery shop scam, above). Also:
3.3 The Herd principle
Even suspicious marks will let their guard down when everyone next to them appears to share the same risks. Safety in numbers? Not if they’re all conspiring against you.
The hope, as I said, is that computer-system designers will read these principles and take them to heart. No, not so they can go make a quick buck. Instead, so that they can design their systems to keep people from being exploited in these ways.
But how, exactly, are we to turn these principles into security mechanisms? Here is the paper's weak spot: it gives these great principles, but no real advice on how to act upon them. What does it mean to take the Herd principle into account? Should we make everyone's computer system act in its own idiosyncratic way just so that people can't take cues from everyone else? How about the Social Compliance principle? What are we supposed to do with that? If people can't tell real cops from fraudsters in real life, how are they supposed to do it over email? The paper doesn't say, which is a shame.
But this brings me to another important point: the magic 'user education' fairy dust. My field has a sad, tired inside joke about the 'magic security fairy dust'. Apparently (we joke) people think that we have some secret supply of this stuff that we can sprinkle on their systems at the very end of the design process to make them secure. (Poof!) Unfortunately, we have come to believe almost the same thing about user education. Why are computers so insecure? In part (we assert) because users are idiots and don't know what they are doing. If only the users were better educated about opening attachments / clicking on links in their email / updating their virus signatures / whatever, then (we say) the world would be a better place. That is, if only enterprises would sprinkle the magic user-education fairy dust on the user base then (poof!) the users would stop doing stupid insecure things.
Personally, I think that this is just passing the buck. Users are going to do what they need to do to get their job done, and I think a system that makes it natural to do something insecure is poorly designed. And the paper makes this point, too. But more damningly, the paper seems to imply that this entire argument is moot. No realistic amount of user education will keep the users from doing stupid insecure things, not when the scammers are good enough. I mean, do we really think that more jeweler-education is needed to stop jewelers from giving jewelry to strangers? Of course not. They already know not to do that. Duh. But... but in the heat of the moment, when they're good and mad about the lassie who was about to cheat them with fake money, and the police officer wants the jewelry as evidence to be used to put her away... Well, it's not really giving jewelry to strangers, is it? It's entrusting evidence to a law enforcement authority.
My point is this: we can probably stop this scam by teaching jewelers about this particular scam. But then the scammers will just invent another one. And we'll have to teach jewelers about that one. And the next one. And the next one. And soon, we'll have to teach jewelers about so many scams that they start skipping our classes just to get on with their lives. No realistic amount of education can protect jewelers from all possible scams. And if that's true in real life, how do we expect user-education to magically secure life on-line?