Last Thursday, I joined other local TriUPA members and listened to Josh Clark's webinar titled "Buttons are a Hack". He was, in essence, urging us to do more with gestures and less with traditional window controls, menus, and buttons when designing interfaces for mobile devices. It's not just the smaller screen size that is motivating the change:
he really wants us to give users more direct interaction with the content. His webinar got a bunch of us thinking about trends in user interface design.
Call for Gestures
As an old guy who was just getting used to the existing computer interface conventions based on metaphors – folders, desktops, etc. – I found the idea refreshing but a bit scary. After all, a tablet (or other mobile device with appreciable screen real estate) is somewhere between paper and computer, between phone and game console. With a touch screen, it deserves some rethinking of the user interaction.
So we are being advised to throw away all the silly stuff that we accept as convention; all those desktop-metaphor controls are indirect, separating the user from the content and from the primitive bodily movements that have developed over thousands of years of evolution. Or at least, that's what I heard.
Josh offered some pithy axioms:
- “Gestures are the keyboard shortcut of touch.”
- “Content is the control.”
- “Information is the interface.”
This sounds like a good starting point. Well, okay, I can see the direction this is going for mobile devices and touch screens. More thought is going to have to go into the design. He referred to LukeW.com for a reference guide to touch gestures.
Which gestures could be considered general or universal? There are swipe, tap, tap-and-hold, pinch, spread, etc. We could consider multi-finger touches (as some games do), but we may run into accessibility issues. We need to reduce the impact of Fitts's law wherever possible. And we need to adopt some conventions so users don't have to relearn the interface for each application.
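To make the gesture vocabulary above concrete, here is a minimal sketch of how an app might tell a tap, a tap-and-hold, and a swipe apart from a single finger's start point, end point, and contact duration. The threshold values are my own illustrative assumptions, not platform conventions, and real gesture recognizers (on iOS, Android, or in the browser's pointer-event model) are considerably more involved.

```typescript
// Sketch: classify a single-finger gesture from its start/end points
// and duration. HOLD_MS and MOVE_PX are assumed thresholds.
type Point = { x: number; y: number };
type Gesture = "tap" | "tap-and-hold" | "swipe";

const HOLD_MS = 500; // assumed: press longer than this counts as a hold
const MOVE_PX = 10;  // assumed: movement beyond this counts as a swipe

function classifyGesture(start: Point, end: Point, durationMs: number): Gesture {
  // Distance the finger travelled between touch-down and lift-off.
  const distance = Math.hypot(end.x - start.x, end.y - start.y);
  if (distance > MOVE_PX) return "swipe";
  return durationMs >= HOLD_MS ? "tap-and-hold" : "tap";
}
```

Even this toy version shows why conventions matter: every app that picks different thresholds makes users relearn the feel of a "hold" or a "swipe".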
After some discussion on how we could discover gestures and how a user could learn them, he made the point that users will need to learn as they go. So as designers, we will have to introduce learning to the user without getting in their face. He suggested we all look at how a player learns a video game as a great example. With a more interactive interface, we should follow these three ideas from gaming:
- Coaching – prompt the user if they look lost
- Leveling Up – introduce easy stuff at early levels and save complicated stuff for the expert
- Power Ups – provide shortcuts for expert users and ways to advance
I would like to add two additional ideas from gaming that came from a discussion with a colleague:
- Managing Inventory – let the interface save and organize your information
- Key or Reference – when transitioning between rooms or levels, there is always some map of the terrain or point of reference, some metadata to help
I would go further with the gamification insights, but I wonder whether they exhaust the possibilities for learning. Most of the examples he gave were from games where I play against the computer, or against other players but still by myself. This still doesn't handle the social aspects needed for the growing amount of collaborative work. It doesn't encourage or train you to work with others, even if it does provide incentives. As a rudimentary example, think of the skills you needed to drive on the highway. Part of the skill is controlling the car, but part of the skill is keeping the right space between your car and the other cars; yielding or taking the right of way; keeping up with traffic. This is more than just learning the road signs and how to drive your car by yourself. It means learning how to work with others so that everyone arrives safely. Let's see if the user interface can help with that as well.
For more, on Twitter, search #uievs (for UIE virtual seminars).