Valorem Consulting

By Evan Lang

IdentityMine has been experimenting with the best approaches to produce a good User Experience (UX) on Kinect – specifically ways that users might interact with a software application GUI, as opposed to playing games. The software development industry is enthusiastically grappling with this issue.

This is the third post of a 5-post series that dives into User Interface considerations when developing software applications using gestures and Kinect sensors.

Gesturing with the Kinect is similar to using a keyboard; signals are sent to an application. But how does the application know what to do with them? If a form contains two text boxes, keyboard input goes to the one with focus. The same is true with gesturing. To focus on a particular control, just move the cursor over it; to help users identify which controls they can interact with, the cursor should be ‘attracted’ to such controls (like a magnet) so they are easier to identify and navigate to. The control should animate or otherwise indicate that it does indeed have focus.
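The magnetic-attraction idea can be sketched in a few lines. This is a hypothetical illustration, not IdentityMine's implementation: the `Control` type, `snap_cursor` function, and the attraction-radius model are all invented for the example.

```python
# Sketch of a "magnetic" cursor: when the raw cursor point comes within a
# control's attraction radius, it is pulled toward that control's center.
from dataclasses import dataclass
import math

@dataclass
class Control:
    x: float       # center x, in pixels
    y: float       # center y, in pixels
    radius: float  # attraction radius, in pixels

def snap_cursor(raw_x, raw_y, controls, strength=0.5):
    """Blend the raw cursor toward the nearest in-range control.

    strength: 0 = no attraction, 1 = snap fully to the control center.
    Returns the adjusted cursor and the control that should show focus
    feedback (or None if nothing is in range).
    """
    best, best_d = None, float("inf")
    for c in controls:
        d = math.hypot(raw_x - c.x, raw_y - c.y)
        if d < c.radius and d < best_d:
            best, best_d = c, d
    if best is None:
        return raw_x, raw_y, None
    # Attraction grows as the cursor nears the center of the control.
    pull = strength * (1 - best_d / best.radius)
    return (raw_x + (best.x - raw_x) * pull,
            raw_y + (best.y - raw_y) * pull,
            best)
```

Returning the focused control lets the UI layer trigger the animation or highlight that confirms focus to the user.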

We’ve been experimenting with two ways to use the cursor with gesture. In the first experiment we physically pointed at a control, arm fully extended. Unfortunately, pointing is too subjective (more so than you might think), and the Kinect sensor is too imprecise for simple pointing to be practical. So we modified the experiment to function more like a Wii controller, where the direction of the point is a suggestion rather than a requirement. The Wii cursor is rarely positioned precisely where the controller is pointing, but it doesn’t really matter; as long as the user can see the cursor they can move it by moving the controller, and that’s what matters. The same applies with the Kinect and pointing.
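One common way to get this Wii-like behavior is to ignore the pointing ray entirely and instead map the hand's position inside a comfortable "interaction box" in front of the shoulder onto the screen. The following sketch assumes 2D skeleton-space coordinates in meters with y pointing up; the box dimensions and function name are invented for illustration.

```python
# Relative (Wii-style) cursor mapping: the hand's travel inside a box
# anchored at the shoulder is scaled to screen coordinates, so the
# cursor follows hand movement rather than the exact pointing direction.

SCREEN_W, SCREEN_H = 1920, 1080
BOX_W, BOX_H = 0.5, 0.4   # meters of hand travel mapped to the full screen

def hand_to_cursor(hand, shoulder):
    """hand, shoulder: (x, y) skeleton-space positions in meters, y up."""
    # Normalize hand travel relative to the shoulder to the range 0..1.
    nx = (hand[0] - shoulder[0]) / BOX_W + 0.5
    ny = 0.5 - (hand[1] - shoulder[1]) / BOX_H  # screen y grows downward
    # Clamp so the cursor stays on screen at the edges of the box.
    nx = min(max(nx, 0.0), 1.0)
    ny = min(max(ny, 0.0), 1.0)
    return nx * SCREEN_W, ny * SCREEN_H
```

With this mapping, precision comes from visible feedback: the user steers the cursor by watching it, exactly as with a Wii remote.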

Pointing is effective because it doesn’t get confused with other gestures. While the user's arm is fully extended, the application recognizes that the only thing the user is doing is positioning the cursor. The user is not going to accidentally turn a page on an e-reader, for example, by pointing too fast. Pointing does have a drawback, however. In the world of touchscreen-based NUI, the term ‘gorilla arm’ refers to the pain and fatigue users experience after using a vertical touchscreen for more than a few minutes. Inflicting pain on users is bad UX, and pointing risks causing a lot of gorilla arm.
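The "fully extended arm" condition that makes pointing unambiguous can itself be detected from skeleton data. A plausible sketch (not the actual IdentityMine logic; joint coordinates and the threshold are assumptions) compares the shoulder-to-hand distance against the total arm length:

```python
import math

def is_pointing(shoulder, elbow, hand, threshold=0.9):
    """Treat the arm as 'fully extended' when the straight-line
    shoulder-to-hand distance approaches the total arm length
    (upper arm + forearm)."""
    def dist(a, b):
        # 2D for brevity; a real skeleton stream would use 3D joints.
        return math.hypot(a[0] - b[0], a[1] - b[1])

    arm_len = dist(shoulder, elbow) + dist(elbow, hand)
    if arm_len == 0:
        return False
    return dist(shoulder, hand) / arm_len >= threshold
```

While `is_pointing` is true, the application can route all motion to cursor positioning and suppress other gesture recognizers, which is what prevents the accidental page turn described above.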

So how can we move a cursor, independent of gesturing, without hurting people? One idea comes from the realization that users have two hands (let's set aside accessibility for this discussion, since Kinect's reliance on full-body motion is itself at odds with accessibility). A user could use one hand for cursor movement without pointing, and the other for gesturing. There are several questions about this approach: Is it possible to accurately recognize when the user is done positioning the cursor and wants to rest their arm without the cursor moving? Is it necessary to hold the cursor stationary during that time? Will cursor movement and gesturing conflict with each other or require users to bang their arms together? Can lefties and righties have the same dominant-hand experience? We're looking into all of these issues, and hope to have answers soon (stay tuned).
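The two-hand split described above can be sketched as a small controller that routes one hand to the cursor and the other to a gesture recognizer, freezing the cursor when the cursor hand drops to rest. Everything here is hypothetical: the class name, the rest-height threshold, and the normalized coordinates are all assumptions made for the example.

```python
# One hand drives the cursor; the other performs gestures. When the
# cursor hand drops below a resting height, the cursor holds its last
# position so the user can relax their arm without the cursor drifting.

class TwoHandController:
    def __init__(self, rest_y=0.0):
        self.rest_y = rest_y           # below this height = hand at rest
        self.cursor = (0.5, 0.5)       # last cursor position (normalized)
        self.cursor_hand = "right"     # mirrored for left-handed users

    def update(self, left_hand, right_hand):
        """Each hand is an (x, y) skeleton-space position, y up."""
        if self.cursor_hand == "right":
            pointer, gesture = right_hand, left_hand
        else:
            pointer, gesture = left_hand, right_hand
        if pointer[1] > self.rest_y:   # hand raised: move the cursor
            self.cursor = (pointer[0], pointer[1])
        # else: hand is resting, keep the cursor where it was
        return self.cursor, gesture    # gesture hand feeds the recognizer
```

Swapping `cursor_hand` at runtime is one way to give lefties and righties the same dominant-hand experience; whether that swap can be detected automatically is one of the open questions above.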

In the next installment of this series, we will focus on Buttons. You can also follow my personal blog here!

Part 1: Introduction

Part 2: Gestures

Part 4: Buttons

Part 5: Multiuser Scenarios

© 2018 IdentityMine, Inc.