IdentityMine has been experimenting with the best approaches to produce a good User Experience (UX) on Kinect – specifically ways that users might interact with a software application GUI, as opposed to playing games. The software development industry is enthusiastically grappling with this issue.
This is the fourth post of a 5-post series that dives into User Interface considerations when developing software applications using gestures and Kinect sensors.
Buttons are ubiquitous throughout computing. In many ways, they are the fundamental GUI control. The metaphor works with a mouse: clicking a mouse button translates directly to clicking the virtual button. The metaphor works with a touchscreen: pressing the virtual button works like pressing any real button.
The metaphor doesn't work with the Kinect; in fact, it completely breaks. There is no machinery in front of the user to press anything with. It is possible to recognize a gesture that looks sort of like the user is pressing an invisible, midair button, but users have different ideas of just what that action is supposed to look like. Some users press ‘down’ (moving their hand towards the screen) and then quickly back, like tapping someone on the shoulder. Others are more deliberate, and move their hand in, wait, then move it back out. Still others will only move their hand towards the screen, and leave it there before letting their hand fall to rest (like they’re punching something). Finally, many are familiar with the way Xbox UI works, and will simply hold still over the button expecting it to eventually activate. What’s clear is that the metaphor is too far gone to be of much use.
This represents a major departure from traditional UX design. Simply put, the use of buttons needs to be greatly reduced in motion-interaction UIs. Instead, special controls with specific purposes will replace many of the common uses for buttons. Time and experience will reveal a standard set of controls that can cover their traditional functionality. Buttons will still be valuable, using the Xbox's hover-over approach, but that must always be weighed against the usability cost of busy-waiting. For games it's not really an issue, since the UI is inherently simple; but in applications that call for a complex UI, busy-waiting for buttons to activate will quickly become frustrating.
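The Xbox-style hover-over approach can be sketched as a dwell timer: the button fires only after the tracked hand cursor stays inside its bounds for a set interval, and leaving the bounds resets the timer. This is a minimal illustration, not Kinect SDK code; the class name, coordinate units, and the 1.5-second dwell are all assumptions.

```python
import time

class DwellButton:
    """Hover-to-activate button: fires after the hand cursor dwells inside it."""

    def __init__(self, x, y, width, height, dwell_s=1.5):
        self.bounds = (x, y, x + width, y + height)
        self.dwell_s = dwell_s          # how long the hand must hover (illustrative default)
        self._enter_time = None

    def contains(self, hx, hy):
        x1, y1, x2, y2 = self.bounds
        return x1 <= hx <= x2 and y1 <= hy <= y2

    def update(self, hx, hy, now=None):
        """Feed the hand-cursor position each frame; returns True once on activation."""
        now = time.monotonic() if now is None else now
        if not self.contains(hx, hy):
            self._enter_time = None     # hand left the button: reset the dwell timer
            return False
        if self._enter_time is None:
            self._enter_time = now      # hand just entered the button
        if now - self._enter_time >= self.dwell_s:
            self._enter_time = None     # fire once, then require re-entry
            return True
        return False
```

The reset-on-exit behavior is what makes dwell activation feel deliberate rather than accidental, but it is also exactly the busy-waiting cost described above: the user must hold still for the full interval on every press.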
Here are some simple ideas for controls that would work in common scenarios where you might otherwise use a button.
A common scenario asks the user to either confirm or deny something. Usually this is represented by two Yes/No or OK/Cancel buttons. Instead, the user could interact with a Kinect control that—while it’s in focus—includes a slider-like visual that tracks with their hand. Slide it to the left to deny, to the right to confirm.
Another common use for a button is to navigate in a dialogue-type scenario. With the Kinect, you could use something that looks like a joystick; while focused, swipe to the left, right, up, down, or whatever direction makes sense to go forward/back/finish/cancel.
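The joystick-style navigation above reduces to classifying a short hand movement into one of four directions. A minimal sketch, assuming screen-style coordinates (y grows downward) and an illustrative minimum distance to reject jitter, could compare the displacement on each axis and let the dominant one win:

```python
def classify_swipe(start, end, min_dist=0.2):
    """Classify a hand movement from start to end (x, y) as a swipe direction.

    Returns 'left', 'right', 'up', 'down', or None if the movement is too
    small to count as a swipe. Threshold and units are illustrative.
    """
    dx = end[0] - start[0]
    dy = end[1] - start[1]
    if max(abs(dx), abs(dy)) < min_dist:
        return None                     # jitter, not a deliberate swipe
    if abs(dx) >= abs(dy):              # dominant axis wins
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"
```

The focused control would then map each direction to forward, back, finish, or cancel as the dialog requires.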
Checkboxes are just buttons with a special style, so they don’t work as-is. But if you change the metaphor to an on/off switch, the rest solves itself.
In the next installment of this series, we will focus on Multiuser Scenarios. You can also follow my personal blog here!