
Introduction

Even the strongest security system can be broken using a sticky note. The first step is simple: a legitimate user writes down a password on the sticky note and leaves it in a convenient location, such as the top drawer of a desk, or even stuck to the computer screen. The next step is also simple: any attacker can then walk by and read the note. Once a legitimate password has been obtained, gaining access to the system is often easy.

All the rigorously-checked cryptography in the world can't help if a legitimate user discloses a password, yet many users write down their passwords and stick these notes to a computer screen for easy access. Sure, it makes things easy for the user, but it also makes things easy for anyone else in the building. Sometimes an attacker may not even need to gain access to the building: walking by outside and looking through the window is sufficient.

Nielsen makes a bold but probably not unreasonable claim:

``Take a walk around any office in the world and you can collect as many passwords as you like by

Now, it's easy to blame the user. But placing blame doesn't make things more secure. And is the blame even in the right place? Patrick mentions that the ``human error'' approach was abandoned by airlines with good results: now that planes and related equipment are designed with human needs and limitations in mind, flight safety has greatly improved [Patrick, 2002].

Norman's successful book, The Design of Everyday Things [Norman, 1988], discusses the role of poor design in causing what many people consider to be human error. His theories have been applied to the design of a wide range of ``things,'' yet so-called secure systems often ignore the design rules he proposed, and the administrators of such systems continue to blame the user for design failures.

By moving past the assumption that users are the source of problems, security experts can begin to look at the real causes of error. Technology can play a large role in preventing errors, but it can also play a large role in forcing them.

Unless system designers understand how users actually use systems, they risk encouraging users to make insecure choices, such as disclosing passwords, through the very rules intended to increase security. Section 2 discusses how password rules that should, in theory, improve security often play out very differently in practice.

The perception is often that usability and security are opposing goals rather than needs that must both be taken into account. A quote attributed to Oliver Elphick goes, ``Make it idiot-proof, and someone will breed a better idiot.'' This is probably how many security experts see their security-breaching users. But the users see it quite differently: the system was getting in the way of their work, so they worked around it to ensure that more important things got done. Section 3 looks at the user considerations that can lead to insecure choices.

It's all very well to point a finger and say there's a problem, but without a solution we are not much further ahead. Thankfully, work has begun on finding ways to design systems that are both usable and secure. Some of this work is summarized in Section 4.

