IdentityMine

Tags: Evan Lang

IdentityMine has been experimenting with the best approaches to producing a good User Experience (UX) on Kinect: specifically, ways that users might interact with a software application GUI, as opposed to playing games. The software development industry is enthusiastically grappling with this issue. Last week, about 2,500 software developers got a boost from Microsoft when it distributed free Kinect sensors to every attendee at the MIX11 conference in Las Vegas. The Kinect gaming experience is pure Natural User Interface (NUI), relying on voice and gesture to interact with the application.

This is the first post of a five-post series that dives into user interface considerations when developing software applications using gesture and Kinect sensors.

The Kinect is designed, marketed, and sold for gaming. Of course, every game needs a UI to get the user into the game, and there's the Xbox shell itself. But gaming UIs are dramatically simpler than many others, because users are not expected to sit and use them for anything beyond personal entertainment.

EXAMPLE

On the Kinect, a typical action used to activate something (such as a button) is to move the cursor over the item and then hold it there for a couple of seconds. This hold-and-wait approach works well for game UIs, because there are only a few movements between launching the application and starting the game. But imagine a typical line-of-business gesture-driven application, where the UI is all there is. Would the user constantly need to hover a hand over buttons and wait for something to happen? That could get old fast.
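To make the hold-and-wait pattern concrete, here is a minimal sketch of dwell-based activation, written in Python for brevity rather than against an actual Kinect API. It assumes the app already hit-tests the hand cursor against on-screen targets each frame; `DwellTracker` and its `update()` method are illustrative names, not part of any Kinect SDK.

```python
import time

DWELL_SECONDS = 2.0  # how long the cursor must hover before activating


class DwellTracker:
    def __init__(self, dwell_seconds=DWELL_SECONDS):
        self.dwell_seconds = dwell_seconds
        self.target = None       # the target currently under the cursor
        self.hover_start = None  # when the cursor first entered it

    def update(self, hovered_target):
        """Call once per frame; returns a target to activate, or None."""
        now = time.monotonic()
        if hovered_target is not self.target:
            # Cursor moved onto a new target (or off all targets): restart.
            self.target = hovered_target
            self.hover_start = now if hovered_target is not None else None
            return None
        if self.target is not None and now - self.hover_start >= self.dwell_seconds:
            self.hover_start = now  # don't re-fire on the very next frame
            return self.target
        return None
```

Game UIs usually pair this timer with a fill animation around the cursor so the user can see the activation coming; without that feedback, the two-second wait feels even longer.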

On the other hand, thanks to the Kinect's open architecture, we're seeing several third-party gesture recognition libraries being released on the PC. The great thing about gestures is that once you learn them, and learn how to use the application, you can send more complex signals to the application more quickly. The hurdle is learning to perform the gestures appropriately within the context of the application.
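As a rough illustration of what such a library does internally, the sketch below detects a horizontal swipe from a short history of hand positions. The class, the thresholds, and the per-frame `feed()` method are hypothetical and chosen for clarity, not any particular library's API.

```python
import time
from collections import deque


class SwipeDetector:
    def __init__(self, min_distance=0.4, max_duration=0.5):
        self.min_distance = min_distance  # meters of horizontal travel
        self.max_duration = max_duration  # seconds the swipe may take
        self.samples = deque()            # (timestamp, hand_x) history

    def feed(self, hand_x):
        """Add one hand-position sample; returns 'left', 'right', or None."""
        now = time.monotonic()
        self.samples.append((now, hand_x))
        # Keep only samples recent enough to belong to one swipe.
        while self.samples and now - self.samples[0][0] > self.max_duration:
            self.samples.popleft()
        # A swipe is enough horizontal travel within the time window.
        for _, old_x in self.samples:
            if abs(hand_x - old_x) >= self.min_distance:
                self.samples.clear()  # report each gesture only once
                return "right" if hand_x > old_x else "left"
        return None
```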

The Kinect includes voice recognition, which is a powerful component of the experience. However, using voice to navigate a gesture experience is like using a keyboard alongside a mouse: powerful, but it shouldn't be a requirement for UI navigation (as evidenced here: http://www.industrygamers.com/news/wii-dominates-living-room-space-xbox-in-the-bedroom/). You should be able to do everything with motion, just as you should be able to do everything with a mouse (unless the goal is actually to write text), though voice, like the keyboard, can offer great shortcuts.
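One way to honor that principle in code is to route both input channels into a single action table, so voice is strictly a shortcut and motion can always reach everything. All of the names, phrases, and handlers below are hypothetical, purely to show the shape of the idea.

```python
# One shared action table: every action is reachable by gesture,
# and voice merely offers a faster path to the same handlers.
actions = {
    "go_back": lambda: print("navigating back"),
    "open_settings": lambda: print("opening settings"),
}

# The gesture cursor reaches every action through on-screen buttons...
buttons = {"Back": "go_back", "Settings": "open_settings"}

# ...while speech recognition maps phrases onto the very same actions.
voice_phrases = {"go back": "go_back", "settings": "open_settings"}


def on_button_activated(label):
    actions[buttons[label]]()


def on_phrase_recognized(phrase):
    action = voice_phrases.get(phrase.lower())
    if action is not None:  # unrecognized speech is simply ignored
        actions[action]()
```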

In the next installment of this series, we will focus on Gestures.  You can also follow my personal blog here!

Read the rest of the series:

Part 2: Gestures

Part 3: Cursor

Part 4: Buttons

Part 5: Multiuser Scenarios
