IdentityMine

Tags: Evan Lang, Tips, Uncategorized

IdentityMine has been experimenting with the best approaches to produce a good User Experience (UX) on Kinect – specifically ways that users might interact with a software application GUI, as opposed to playing games. The software development industry is enthusiastically grappling with this issue.

This is the fifth post in a five-post series diving into User Interface considerations when developing software applications using gestures and the Kinect sensor.

Multiuser Scenarios

The Kinect can reasonably support up to 4 users simultaneously, if personal boundary issues aren’t a big concern. Of course, most of the issues regarding multiuser applications will be specific to the application’s design and intended purpose - as far as general motion is concerned, not much changes. The application will need to maintain a cursor and focus model for each user independently. Users can be easily distinguished from one another by assigning each a color and displaying it prominently in their cursor and in the highlight shown when they focus on a specific control.
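
To make that concrete, here is a rough sketch of the per-user bookkeeping (written in TypeScript purely for illustration; the type names, color palette, and tracked/lost callbacks are our own placeholders, not part of any Kinect SDK):

```typescript
// Hypothetical per-user interaction state; names are illustrative only.
interface UserInteractionState {
  skeletonId: number;              // tracking id reported by the sensor
  color: string;                   // color shown in this user's cursor and focus highlights
  cursorX: number;                 // current cursor position in screen coordinates
  cursorY: number;
  focusedControlId: string | null; // control this user currently has focus on, if any
}

// A small palette so each of up to four tracked users gets a distinct color.
const USER_COLORS = ["#e6194b", "#3cb44b", "#4363d8", "#f58231"];

const users = new Map<number, UserInteractionState>();

function onUserTracked(skeletonId: number): void {
  // Register a newly tracked user and assign the next available color.
  if (!users.has(skeletonId)) {
    users.set(skeletonId, {
      skeletonId,
      color: USER_COLORS[users.size % USER_COLORS.length],
      cursorX: 0,
      cursorY: 0,
      focusedControlId: null,
    });
  }
}

function onUserLost(skeletonId: number): void {
  // Drop the user's state when the sensor stops tracking them.
  users.delete(skeletonId);
}
```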

What happens when multiple users try to interact with the same element on the screen? In unusual circumstances it might make sense to combine input from each user, but is that realistic? Most often an element should be controllable by only one user. We achieve this by allowing only one user at a time to hold focus on a particular control. A control’s focus is available on a first-come, first-served basis: if two people want to scroll through a single list box, the first one to focus on it with their cursor gets to control it. When that user loses focus, the other is free to take over.
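
A minimal sketch of that first-come, first-served rule, again in TypeScript with hypothetical class and method names, could look like this:

```typescript
// First-come, first-served focus: the first user to focus a control owns it
// until they release it. Names are illustrative, not from any Kinect SDK.
class FocusManager {
  // controlId -> skeletonId of the user who currently owns that control
  private owners = new Map<string, number>();

  // Returns true if the user acquires (or already holds) focus on the control.
  tryAcquireFocus(controlId: string, skeletonId: number): boolean {
    const owner = this.owners.get(controlId);
    if (owner === undefined) {
      this.owners.set(controlId, skeletonId);
      return true;
    }
    return owner === skeletonId;
  }

  // Called when the owning user moves their cursor away or is no longer tracked.
  releaseFocus(controlId: string, skeletonId: number): void {
    if (this.owners.get(controlId) === skeletonId) {
      this.owners.delete(controlId);
    }
  }
}
```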

This introduces a problem of “stickiness” in controls. When one user interacts with a control, but doesn’t do much else afterward, their focus will remain on that control since they haven’t focused on anything new; this prevents others from using the control. Therefore, it’s best to have the control automatically drop focus if the user controlling it is inactive for a specified period of time.
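
That inactivity rule is easy to sketch as well; the five-second threshold below is only an example, and the names are again placeholders:

```typescript
// Inactivity timeout: record when the owning user last interacted with a
// control and drop their focus once a threshold passes.
const FOCUS_TIMEOUT_MS = 5000; // illustrative value; tune per application

class TimedFocus {
  private owner: number | null = null;
  private lastActivity = 0;

  // Returns true if this user controls the element for this interaction.
  interact(skeletonId: number, now: number): boolean {
    // Auto-release a stale owner before deciding who gets focus.
    if (this.owner !== null && now - this.lastActivity > FOCUS_TIMEOUT_MS) {
      this.owner = null;
    }
    if (this.owner === null) {
      this.owner = skeletonId;
    }
    if (this.owner === skeletonId) {
      this.lastActivity = now;
      return true;
    }
    return false; // another user currently holds focus
  }
}
```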

Conclusion

Many of the ideas in this five-part series are still works in progress, but if they are considered together and continually tested for usability and performance, a natural, reusable, and easy-to-use interaction system for applications can be developed. Such applications would be useful for controlling home entertainment systems, presentations, and interactive advertising in public spaces, and surely many other uses will emerge as people are increasingly inspired by the possibilities of this new technology.

IdentityMine continues to pave the way with innovative User Experiences and software development focused on Natural User Interface (NUI). We'd love to hear from you and see examples of the experiments that you are conducting. Send them our way, and we may post them on our site!

 

Part 1: Introduction

Part 2: Gestures

Part 3: Cursor

Part 4: Buttons
