Listen, I’ve got some ink on my body but even I recognize that offering people free tattoos is going to lead to some regrettable decisions. In User eXperience design, we have a similar problem with the idea of “affordances”: using them isn’t going to end well.
Jared M. Spool defines “intuitive” in terms of the gap between the knowledge the user has and the knowledge the application requires the user to have. Where there is a large gap, the application can be made “intuitive” (or, at least, usable) by providing the necessary training and support to help the user bridge the gap. As I’ve discussed elsewhere, part of narrowing that gap is delivering an application that works the way the user expects it to (i.e. the UX matches the user’s mental model). But there’s another component of an “intuitive” design that we also discuss in Learning Tree’s User Experience (UX) Design for Successful Software course – creating a series of User Interfaces that the user automatically knows how to use as soon as they see them.
You see, in a way, everyone is a UI/UX expert. We all spend so much time interacting with software that we’ve all become sophisticated users. We recognize that some applications just “work” or “make sense” to us…and others do not. More importantly, we all spend so much time successfully working with software that if we can’t make an application work, we know it’s because the application is leading us into error. As we point out in Learning Tree’s User Experience (UX) Design for Successful Software course, it’s always the UI’s fault.
However, as users, we often can’t articulate what’s going wrong in a UI that leads us into error: We describe some UIs as “intuitive” and others as “non-intuitive”…which, without a definition of “intuitive,” just begs the question of what’s going wrong. It’s a kind of circular reasoning – “I can’t get it to work because it’s not intuitive. I know it’s not intuitive because I can’t get it to work.”
What makes us, as UX designers, different from the rest of the user community? Presumably, we can give a meaning to the word “intuitive” and, furthermore, use that meaning during the design phase of an application. As designers, we can identify problems in UIs before we build them and identify alternatives that will allow us to reliably build “intuitive” UIs.
Let me head off one version of “intuitive” right now. I’m not suggesting that when creating an electronic shopping mall you create something that looks, on the screen, like a shopping mall. The skills that a user needs to navigate a physical shopping mall do not translate into navigating a virtual shopping mall. Besides, it’s a dumb idea. Presumably, we’re creating a virtual shopping mall because it works differently from the real thing. If users wanted a real shopping mall, they’d go to a real shopping mall. But it’s these kinds of regrettable user interfaces that the idea of “affordances” leads us into.
What I’m dismissing here is the idea that “intuitive” depends on “affordances,” at least if you use “affordances” to mean translating skills from the physical world (I’ll call these “physical affordances”) into the virtual world. Affordances first appeared as a term in psychology in the late 70s (in the work of J.J. Gibson) and were refined by Donald Norman (the author of “The Psychology of Everyday Things”) in 1988. Norman pointed out that only perceived affordances matter: what the user notes and understands. Andrew Maier has written a fascinating history of the concept.
If you read Maier’s article (and it is a good one) he has a theoretical example of affordances in action – which I completely disagree with. Taking a toothbrush as an example, he claims that its shape and design would necessarily lead someone unfamiliar with toothbrushes to use it to clean their teeth. You’re welcome to read his description and see if you find it convincing. For me, the leap of faith is in the sentence where the user looks at the bristles and deduces that he should put the thing in his mouth.
There is plenty of anecdotal evidence that no one, seeing a toothbrush for the first time, has ever figured out what to do with it without being shown. People must first have several ideas: their teeth need cleaning, there are tools for that, and this is one of those tools. Only after acquiring that mental model of the problem will a human being begin to take advantage of the affordances Maier discusses. In other words, the user must first have the appropriate mental model before the affordances make sense.
In UX, the idea of affordances takes that one step further: it’s the idea that a user will see something in the UI that looks like something in real life and, as a result, know what to do with the UI. That’s not going to happen because, beyond the problems I’ve pointed out with the toothbrush example, the thing on the screen is two dimensional, cannot be grasped, and must be manipulated with a mouse or keyboard. The mental models used to manipulate the physical world are not the models used in the virtual world.
That’s not to say that affordances are a completely stupid idea. There are “virtual affordances” that do work, as I’ll discuss in my next post.