IdentityMine

Tags: Stuart Mayhew

Get Moving.

The next consideration is the interactive experience that best suits a Kinect device. The Kinect camera, at this point, is not as sensitive to body detail as you might expect. It cannot pick up, for example, individual fingers or facial details without some modifications or custom work. That is not to say it cannot be done or isn’t being worked on by developers, but the out-of-the-box experience is much more about skeletal tracking. Below is an example of what the Kinect sees and interprets to drive the interactions; it is basically a skeleton stick figure.
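To make the stick-figure idea concrete, here is a minimal sketch of how that skeletal data can be represented in code. This is not the Kinect SDK itself; the joint names follow the original sensor’s 20-joint skeleton, and the positions and the helper function are purely illustrative.

```python
# A minimal sketch of the skeletal data Kinect exposes: a named set of
# joints, each with a 3D position (x, y, z in meters, y pointing up).
# Joint names follow the original Kinect's 20-joint skeleton.

KINECT_JOINTS = [
    "head", "shoulder_center", "spine", "hip_center",
    "shoulder_left", "elbow_left", "wrist_left", "hand_left",
    "shoulder_right", "elbow_right", "wrist_right", "hand_right",
    "hip_left", "knee_left", "ankle_left", "foot_left",
    "hip_right", "knee_right", "ankle_right", "foot_right",
]

def arm_raised(skeleton, side="right"):
    """True when the hand is tracked above the head -- a coarse, whole-limb check."""
    # Index 1 is the vertical (y) component of the joint position.
    return skeleton[f"hand_{side}"][1] > skeleton["head"][1]
```

Even a check this coarse reflects how Kinect interactions tend to be built: compare a few joint positions rather than track fine detail.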

This means that the interactions are all about moving parts of your body, hence the warning about space for people to move. The motion tracking is very fast, with almost no delay. The real issues arise in what gestures work best and are most easily picked up by the camera.

I have found that big, exaggerated gestures are easiest to learn and use for both the camera and the user: swiping your arm from far right to left (or vice versa), raising your arm in front of you, lifting your legs, and leaning all work well. Users really benefit from short animated tutorials of the experience, or some other clue as to what the interaction needs to be; for instance, displaying arrows to indicate motion. Things get a little difficult when you try spinning around or standing sideways, as the camera tends to get confused when parts of the body overlap, which can make the skeletal tracking jump around. We have found that we can make these harder interactions work by placing the camera off center, but this uses more space and can make the arrangement awkward for a tight retail environment. We even worked out a solution using two cameras at the Detroit and New York auto shows, where space wasn’t an issue.
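A big, exaggerated swipe is also the kind of gesture that is simple to recognize in code. The sketch below (not the Kinect SDK; the distance thresholds are hypothetical) classifies a sequence of tracked hand x-positions as a left swipe, a right swipe, or nothing, rejecting jittery motions that reverse direction mid-gesture.

```python
# A minimal sketch of recognizing a broad horizontal swipe from a series
# of hand x-positions (meters), as sampled from skeletal tracking.
# The 0.6 m travel and 0.05 m backtrack thresholds are illustrative.

def detect_swipe(hand_x_positions, min_distance=0.6, max_backtrack=0.05):
    """Return 'left', 'right', or None for a sequence of hand x-coordinates."""
    if len(hand_x_positions) < 2:
        return None
    travel = hand_x_positions[-1] - hand_x_positions[0]
    direction = 1 if travel > 0 else -1
    # Reject motions that reverse significantly mid-gesture (tracking jitter).
    for prev, cur in zip(hand_x_positions, hand_x_positions[1:]):
        if (cur - prev) * direction < -max_backtrack:
            return None
    if abs(travel) >= min_distance:
        return "right" if direction > 0 else "left"
    return None
```

Note the generous thresholds: because people perform the “same” swipe very differently, a recognizer like this should err toward accepting sloppy but committed motions.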

With that in mind, Kinect interactions are best kept big and broad, rather than detailed and intricate manipulations. At this time, we are not quite at the level that “Minority Report” has filled our heads with, but we are on our way. Still, it is possible to come up with some neat interactions and designs, even with these limitations. It is important to keep in mind that many people will complete the same action differently: ask anyone to swipe in front of them and see how many different ways people do it. Every interaction should allow for individual interpretation. Plan for the gesture not to be recognized the first time, and make sure the user feels compelled to try again while minimizing frustration.

So now that we understand the basic requirements for Kinect, the really interesting phase is how we might use this platform to improve a retail experience for consumers. Where does Kinect find a natural fit?


Kinect with Consumers

Kinect was designed as an entertainment device for the Xbox, and that is where it excels; experiences that consider its origins seem to perform best. People like moving around, and doing so generally makes them happy, hence Kinect’s great success in the gaming ecosystem.

However, people don’t like to look like fools in front of others, and nothing will make you look more foolish than the wrong experience, in the wrong environment, at the wrong time. The device and its interactions can create amazing marketing and in-store experiences, but you must choose the experience carefully. Consideration needs to be given to the consumers you are targeting, where you position the interaction, and what your goal is in getting people to complete it. For example, children are happy to jump about and create energetic interactions with Kinect; older users are less inclined to do so. The most careful consideration should go into defining what supporting role a Kinect can play within a retail environment. Generally, Kinect is used as an extension of a marketing campaign or as a concentrated interactive experience. I wouldn’t recommend thinking of the Kinect experience the way you might a touchscreen kiosk, which is generally used to browse products and filter results.

Kinect is often set up with a large display, and it is important to remember that people will be 6–8 feet from the screen. This distance allows the camera to fully track the user’s gestures, so the content needs to work from far away. The bigger the display, the better: it lets others notice the experience from across the store, gives a wonderful sense of control over larger-than-life objects, and boosts the entertainment factor. This means you need to think more like a billboard designer than a touchscreen developer. Content needs to be big, with large target zones so interactions can be properly tracked; it is a little like designing for mobile and finger interactions, but scaled up. Button density should be very light and spread out. An optimal layout has 10 or fewer items on the screen at any given time, and remember, people are not going to read product descriptions at this size. Leave descriptions for the touch kiosks and mobile devices.

Something else worth adding: Kinect also detects voice commands and can be set up to recognize certain key phrases. Voice activation can be difficult in noisy environments such as large retail stores, but if your retail environment is quiet or more exclusive, this can be an alternative way to interact with the device.
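As a toy illustration of key-phrase handling, the sketch below filters recognized speech by a confidence threshold before matching it against a phrase list. The example phrases and the 0.7 threshold are hypothetical; a real deployment would use the recognition text and confidence scores supplied by the Kinect speech API.

```python
# A toy sketch of key-phrase matching on speech-recognition output.
# Low-confidence results (e.g. from a noisy store floor) are rejected
# outright; the 0.7 threshold and the phrase list are illustrative.

def match_key_phrase(recognized_text, confidence, phrases, min_confidence=0.7):
    """Return the matched key phrase, or None for noisy/unrecognized input."""
    if confidence < min_confidence:
        return None
    text = recognized_text.strip().lower()
    for phrase in phrases:
        if phrase.lower() == text:
            return phrase
    return None
```

Raising the confidence threshold is the simplest lever for noisy environments: you trade missed commands for fewer false activations.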

Stay tuned for the last part of this three part series. Be sure to visit our Associate Creative Director's Blog as it is full of thoughts on brand, design, technology, etc.

