IdentityMine has been experimenting with the best approaches to produce a good User Experience (UX) on Kinect – specifically, ways that users might interact with a software application GUI, as opposed to playing games. The software development industry is enthusiastically grappling with this issue. This is the second post of a 5-post series that dives into User Interface considerations when developing software applications with a Natural User Interface (NUI), such as gesture input with Kinect sensors.
Our goal is to design a motion-based interaction system that’s easy to pick up and use, has a short learning curve, and allows users to quickly navigate relatively complex UI.
It's important to understand that gestures have zero meaning by themselves. When users swipe their hand, or pantomime poking, pushing, or pulling motions, they actually imagine themselves physically interacting with what they see on screen. If your software application doesn’t reflect what happens in the users' imagination, the metaphor breaks and users struggle. Therefore what was once flashy and cool in UX has become an absolute zero-compromise requirement: animation. Not just animation in response to a gesture, but animation in anticipation of it. If the NUI application can respond to a swipe gesture, and it can anticipate that a user might be about to make a gesture, it should animate to suggest that.
Suppose you develop a Kinect-enabled e-reader. To turn the page, users just swipe their hand right to left. The user needs to know whether they’re doing the gesture fast enough, large enough, straight enough, etc. to be properly recognized. What do you do? I would put an arrow, or an upturned page corner, on the right side of the visible page. When the user makes a faulty gesture that resembles the appropriate swipe, the arrow/page would animate halfway, as if to say “You almost got it! A little more and I’ll turn the page!” The animation actually teaches the user how to interact with the application. The user may have to try a couple of things the first time, but it shouldn’t take long to figure it out.
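To make the idea concrete, here is a minimal sketch of how a swipe evaluator could report *partial* progress instead of a yes/no answer, so the page-corner animation has something to respond to. The thresholds (`MIN_DISTANCE`, `MIN_SPEED`, `MAX_DRIFT`) and the scoring rule are illustrative assumptions, not part of any Kinect SDK:

```python
# Hypothetical swipe evaluator. Input is a short history of tracked hand
# positions; output is a progress value in [0, 1] rather than a boolean,
# so the UI can animate "you almost got it" states.

MIN_DISTANCE = 0.40   # metres of right-to-left travel for a full swipe (assumed)
MIN_SPEED = 0.8       # metres per second (assumed)
MAX_DRIFT = 0.15      # vertical wobble tolerated, in metres (assumed)

def evaluate_swipe(samples):
    """samples: list of (time, x, y) hand positions, oldest first.
    Returns 1.0 for a recognized swipe, 0.0 for no swipe, and values
    in between to drive the partial page-turn animation."""
    if len(samples) < 2:
        return 0.0
    t0, x0, y0 = samples[0]
    t1, x1, _ = samples[-1]
    distance = x0 - x1                          # right-to-left is positive
    duration = t1 - t0
    drift = max(abs(y - y0) for _, _, y in samples)
    if distance <= 0 or duration <= 0 or drift > MAX_DRIFT:
        return 0.0
    speed = distance / duration
    # Scale progress by how close the motion came to each threshold.
    progress = min(distance / MIN_DISTANCE, 1.0) * min(speed / MIN_SPEED, 1.0)
    return min(progress, 1.0)
```

A result of, say, 0.5 would lift the page corner halfway and then let it settle back, which is exactly the teaching animation described above.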
You shouldn't use discrete gestures like this too frequently. Proper animation does help, but more fluidity in the interpretation of motion will create a better overall experience than the "you-did-it-or-you-didn’t" approach of gestures. Emphasize the use of scrollable regions, sliders, dragging… anything that isn’t a discrete yes/no action. This makes interaction feel much more natural. The problem, which I will attempt to address in future posts on this topic, is in starting and stopping such actions, since both of those are discrete and need to be somehow identifiable.
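As a contrast with the discrete swipe, here is a rough sketch of the fluid alternative: while a drag is engaged, the content simply tracks the hand every frame. The class name, the pixels-per-metre scale, and the idea that engagement comes from some separate trigger (a push toward the screen, say) are all assumptions for illustration:

```python
# Illustrative continuous scroller: instead of waiting for a completed
# gesture, the scroll offset follows the hand on every frame while a
# "grab" is engaged. Starting and stopping the drag remain the discrete
# parts of the interaction.

class ContinuousScroller:
    def __init__(self, scale=1000.0):
        self.scale = scale        # pixels of scroll per metre of hand travel (assumed)
        self.engaged = False
        self.anchor_y = 0.0
        self.offset = 0.0

    def engage(self, hand_y):
        """Begin dragging, e.g. when the hand pushes toward the screen."""
        self.engaged = True
        self.anchor_y = hand_y

    def release(self):
        """End the drag; the content stays where the user left it."""
        self.engaged = False

    def update(self, hand_y):
        """Called every frame; content moves with the hand while engaged."""
        if self.engaged:
            self.offset += (hand_y - self.anchor_y) * self.scale
            self.anchor_y = hand_y
        return self.offset
```

Note that all the nuance lives in `engage` and `release` – the discrete endpoints this series will return to – while the motion in between feels direct and physical.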
Picture a vertical list box that allows a single selection, and assume the user is already ‘focused’ on it. A nice way of handling selection is to envision the list as a turnstile, with a marker in the middle indicating which item will be selected, like the big wheel on The Price is Right. To select an item, all I have to do is move my hand up/down, and the list will scroll right along with it. When my selection is properly positioned, moving my hand right/left, or even in/out, can finalize that selection. Like all gestures, that finalization gesture should be animated in a helpful, suggestive way.
In the next installment of this series, we will focus on the Cursor. You can also follow my personal blog here!