You’ve probably never noticed the really great UIs you’ve worked with or the fabulous UXs they were part of. Why? Because the only reason you’d notice a UI is when something has gone horribly wrong. In fact, you’re already working with a bunch of really great UIs that do your bidding without you doing anything, let alone noticing them.
I’ve always said that, from the users’ point of view, the ultimate user interface is a single button that says “Just Take Care of It.” That may sound silly but, if you think about it, you only need to know two things to implement that user experience: what the user wants done and enough about the current situation to do it.
With all of the information available to the application, my “Just Take Care of It” button doesn’t sound all that silly. And, with this UI, the UX would consist of the user clicking the button to trigger the activity.
In fact, if your application knew enough about the user and the scenario, the user might not even have to press the button — the application would just take action on its own. If that sounds like science fiction, it’s not really. It’s the way, for example, that your furnace and your central air conditioning systems work: Both systems know enough about what you want that (most of the time) they do what’s necessary to keep your living space at the temperature you want without you doing anything. It’s also the way that cruise control works on your car, adjusting speed as you go up and down hills, to keep you at your desired speed (which is just slightly over the speed limit, I assume).
It’s also just about the way that my current document scanner works. When I bought my first scanner, my goal was to get a clean, legible scan. The application that came with my scanner had a UI that gave me about a bazillion options to set (as I remember — it’s been a while and it’s possible that there weren’t all that many options). All of those options related to setting the internal state of the scanner and were related to my goals only indirectly. The options, for example, addressed issues like “Apply Unsharp Mask” and “Histogram Settings”. What I wanted were options like “Improve legibility of text,” “Eliminate grunge,” and “Make it brighter without losing contrast.”
I eventually got a second scanner and the software that came with it was much better: It gave me exactly two options (still one option too many compared to the “Just Take Care of It” solution, you’ll notice). My two options were “Scan a picture” and “Scan a document.” There was also a Settings menu choice that let me access all the options my first scanner had but, to the best of my knowledge, I never used it.
My latest scanner gives me just one option (not counting the Settings option): “Scan.” This new scanner seems to do two passes over whatever I’ve put under the lid: the first pass determines the type of document to scan and the second pass produces my image file…but, quite frankly, I don’t care. We’re now down to one button.
In an ideal world, that button would go away also. Instead of having me click a button, the scanner would turn itself on when I lifted the lid and, when I closed the lid, call up my computer and trigger a scan (perhaps after checking that what was under the lid had actually changed).
In a very real sense, then, the ideal user interface is one that disappears — the application just does what you want it to do. The option of the “disappearing UI” becomes more achievable as the amount of information available to the application increases. My smartphone, for example, uses a combination of the information it gets from its motion sensor, GPS, Bluetooth, and wireless input to decide if it needs me to unlock the phone before using it. If the phone thinks that the only place it’s been is in and out of my pocket (motion sensor), if it’s hooked up to some device I own (Bluetooth), or if I’m in one of a set of listed geographical locations (GPS/wireless), then the phone doesn’t require me to unlock it to use it. You’ll have to decide whether this is too insecure for you, of course, which is why the Settings button is still present, even if the rest of the UI has disappeared.
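That smart-lock decision boils down to a handful of trust signals feeding one yes/no question. A minimal sketch of that logic might look like the following — every name and signal here is invented for illustration, not how any actual phone implements it:

```python
# Hypothetical sketch of a "disappearing UI" smart-lock heuristic,
# loosely modeled on the phone behavior described above.
# TRUSTED_LOCATIONS is an assumed whitelist, not a real API.

TRUSTED_LOCATIONS = {"home", "office"}

def requires_unlock(only_pocket_motion: bool,
                    connected_to_trusted_device: bool,
                    current_location: str) -> bool:
    """Return True if the phone should show the lock screen."""
    if only_pocket_motion:
        # Motion sensor says it never left my pocket.
        return False
    if connected_to_trusted_device:
        # Bluetooth says it's paired with a device I own.
        return False
    if current_location in TRUSTED_LOCATIONS:
        # GPS/wireless says I'm somewhere on my trusted list.
        return False
    # None of the trust signals apply, so fall back to the lock screen.
    return True

print(requires_unlock(False, False, "cafe"))  # True: no trust signal applies
print(requires_unlock(True, False, "cafe"))   # False: pocket-only motion
```

The point of the sketch is that the UI only appears when the application runs out of trustworthy information — which is exactly the trade-off the Settings button lets the user tune.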
Of course, in UX design we try to gather enough information about the user and the user’s scenarios in advance to enable making these kinds of decisions. But, for most of the applications we build, the user needs to be more involved in the process — clicking buttons, entering information, making menu choices, and so on. But, given that the ultimate user experience is, in fact, “no experience at all,” what is the goal we’re striving for when we do a usability test? The answer is that we don’t, in fact, want to finish the test with the user saying “Wow! What a great UI!”
This insight isn’t much help when you’re designing a user experience, of course (which is why we have a three-day course at Learning Tree: UX/UI Design for Successful Software), but it does make usability testing very easy. You should never, for example, see the user’s mouse drifting back and forth across the screen as the user tries to figure out what the appropriate next action should be, because the user should always know what to do next and do it automatically. If, at the end of usability testing, you ask the user “What do you think of the UI?” the answer should be something along the lines of “What? The UI? Oh, it was fine.”
In the UX/UI course, we refer to creating a “satisfying UI” and discuss how to achieve that by leveraging prospective memory, taking advantage of what you can learn about your users’ personas and scenarios, and integrating known design patterns. But it really all comes down to one thing: If you notice the UI you’re working with, then it isn’t very good. You never notice the good ones.