Archive for the 'Kinect for Windows' Category

April 19, 2013

A short history of Natural User Interfaces

A guest post by Marcus Ghaly

Nissan Kinect for Windows

Kinect for Windows in Action

When computing moved from punch cards to keyboards in the 1960s and ’70s, a revolution took place. Governments, businesses, universities, and research institutes began doing large-scale computation and statistical analysis, and built early networks like the ARPANET, which would eventually become the Internet and the World Wide Web.

When we shifted again from text-only interfaces to Graphical User Interfaces, or GUIs, in the ’80s, an even larger change took place. The mouse, keyboard, and GUI made the computer accessible to everyone; it found its way into the home as the desktop computer, and it eventually became unthinkable to run a business without computers and the massive boost to productivity they brought.

Today we are at the beginning of the next revolution: the move from Graphical User Interfaces (GUIs) to Natural User Interfaces (NUIs). We are a highly mobile, data-rich, and highly social society, and the desktop computer no longer serves all of our needs. We use mobile phones, tablets, and conference-table-sized touch screens to stay connected, meet our needs, and accomplish our goals. But NUI is more than just using your finger to tap buttons instead of clicking a mouse. Technologies like Microsoft Kinect can even interpret our bodies’ movements and gestures, to better suit our needs and lifestyles.

At IdentityMine, when we make Natural User Interfaces, we custom-build experiences to fit our clients’ needs, let users easily interact with large amounts of information through voice, touch, and body movement, and create elegant designs that are inviting and engaging. We identify where business goals, user needs, and technology overlap, and create experiences that are unique, playful, and ultimately feel natural. Interested in seeing what we can do for your business? Contact us or check out some of our past projects.

+ Contact us

+ Watch the video on the Nissan Pathfinder Kinect experience

+ See our portfolio

February 6, 2013

10 Steps for Awesome Speech Recognition User Interface Design: Part 1

Speech recognition engine

The big three! Image: MSDN

Speech recognition is one of the most important new technologies to consider when focusing on user-centered design. Voice/speech recognition is now mainstream, and experiences range from frustrating (tried navigating a credit card “customer service” phone tree lately?) to incredibly helpful (have you tried the Kinect’s voice recognition yet?).

Well-designed user experiences are extremely important when it comes to voice recognition, because there is a much higher chance that users will say phrases or words that aren’t recognized or that cause errors. This can create a bad user experience and reflect negatively on your brand. It can also place an additional burden on live customer support if they receive lots of frustrated communications.

Speech applications can also present a very linear experience in which users cannot easily backtrack or change their minds after making a choice. Ensuring that the dialogue, prompts, and grammar are well constructed will help make the experience as positive as possible for the user.

Here are the first five steps to help ensure a high quality experience.

1. Determine Goals and Requirements for the System

Engage in a careful discovery process to determine what it will take to make your application truly successful. This process helps you identify the questions you need to ask and what the application needs to include, so that you can decide on target user groups, functionality, and interactions. (One method is to create personas.)

2. Choose between Natural Language and Directed Dialog

What you think of first may not be what your neighbor would think of first. With only a limited number of potential actions for your users to take, you want to make sure their intentions are recognized correctly. While a natural language application creates a more human-like interface experience, it can be much more complex to design and carries a higher risk of errors. If the application has a limited scope focused on a clear set of actions, directed dialog is often the better choice, as in the sketch below.
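
To make the distinction concrete, here is a minimal, hypothetical sketch of a directed-dialog turn in Python. It is not the Kinect for Windows speech API, and the prompt, phrases, and intent names are invented for illustration; the point is that a pointed question plus a small, fixed grammar keeps misrecognition low compared with open-ended natural language.

```python
# Hypothetical directed-dialog turn: a pointed prompt plus a small, fixed grammar.
PROMPT = "Would you like pickup or delivery?"

GRAMMAR = {                 # recognized phrase -> intent
    "pickup": "PICKUP",
    "pick it up": "PICKUP",
    "delivery": "DELIVERY",
    "deliver it": "DELIVERY",
}

def recognize(utterance):
    """Map an utterance to an intent, or None if it falls outside the grammar."""
    text = utterance.lower()
    for phrase, intent in GRAMMAR.items():
        if phrase in text:
            return intent
    return None             # out of grammar: re-prompt rather than guess

print(PROMPT)
print(recognize("Delivery, please"))  # -> DELIVERY
print(recognize("Surprise me"))       # -> None, so the app asks again
```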

3. Choose the Application Persona

During the discovery process, make sure to define any brand or personality requirements for the application. Remember that whatever voice you select for your application (and by extension your brand) reflects on you, so be sure to prioritize usability over novelty. It is also important to consider where and how the application will be used, for example navigating a car GPS system versus ordering takeout, to direct the language and flexibility of the application. Keep language and cultural differences in mind as well; one size doesn’t fit all.

4. Map the Voice User Interface (VUI) Structure

Have a plan! After your initial rounds of discovery and approach planning, it is time to build some skeleton wireframes. This information is conveyed well with graphical wireframes and flow charts, which are especially important when determining the fastest way to help users accomplish their goals. A simple code sketch of the same idea follows.
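
As a rough illustration, the same skeletal structure a flow chart captures can also be written down as plain data. The states, prompts, and intents below are invented for this sketch; the idea is simply that each node pairs a prompt with the intents that move the user forward, which makes it easy to trace the shortest path to a goal.

```python
# Hypothetical VUI flow skeleton: each node is a prompt plus the intents that
# advance the dialog. Unrecognized input keeps the user on the same prompt.
FLOW = {
    "start":   {"prompt": "Pickup or delivery?",
                "next": {"PICKUP": "confirm", "DELIVERY": "address"}},
    "address": {"prompt": "What is the delivery address?",
                "next": {"ADDRESS": "confirm"}},
    "confirm": {"prompt": "Shall I place the order?",
                "next": {"YES": "done", "NO": "start"}},
    "done":    {"prompt": "Thanks, your order is on its way.", "next": {}},
}

def advance(state, intent):
    """Move to the next state, or stay put when the intent is unrecognized."""
    return FLOW[state]["next"].get(intent, state)

state = "start"
for intent in ["DELIVERY", "ADDRESS", "YES"]:   # one possible path to the goal
    print(FLOW[state]["prompt"])
    state = advance(state, intent)
print(FLOW[state]["prompt"])                    # final confirmation prompt
```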

5. Finalize the VUI Design

According to UX Magazine, by this point in the process you should have:

  • Clear sets of requirements, goals, and use cases/user stories.
  • A decision on whether or not the application will support natural language.
  • Guidelines about the application’s branding and personality requirements.
  • Skeletal flow charts indicating the basic paths through the application.

Now complete the framework for the application by filling in and refining the details. This is the time to circulate your design to all necessary stakeholders and incorporate their feedback. Don’t proceed past this stage until you have identified what each user can do at each specific step in the application.

Stay tuned next week for the final five steps to ensure a high-quality experience with your speech recognition user interface design. IdentityMine helps businesses rethink the way they communicate with their customers across multiple digital touch points. Interested in learning more about our take on voice interaction? Want to see how we’ve incorporated it into our Windows Phone, Xbox, and Kinect applications? We’d love to tell you about it, so contact us!

+ Contact Us

+ Read UX Magazine’s suggestions for creating a high quality experience

+ Learn about Kinect Voice Recognition

+ Kinect for Windows Tutorial

+ Cocktail Party Techie Term: Personas

October 22, 2012

Beyond Touch: What’s Next for Computer Interfaces?

Beyond Touch: What’s Next for Computer Interfaces?

Will our bodies be the next device?

Have you ever thought about what’s next? No, we’re not talking about an afterlife, but rather about what the next step will be that moves us beyond touch interactions when it comes to operating devices. Michael Keller wrote a very interesting article for Txchnologist.com after talking to some brilliant and inventive minds from Tufts University and Carnegie Mellon’s Human-Computer Interaction Institute. They bring up the Microsoft Kinect, which is near and dear to our IdentityMine hearts. Take a look:

It’s anybody’s guess what our interaction with computers will look like in the coming years. Will we still be poking and pinching tiny touchscreens to sort through party pictures from the previous night? How far off until we see holographic gesture interfaces like Tom Cruise used in Minority Report? And when will we finally retire that ancient crumb-crammed keyboard and dirty fingerprint-flecked mouse?

A variety of ideas for how people will communicate with and through computers have been in the works for years, though only a few have matured beyond the drawing board. But with the conceptually and commercially huge move to touch interfaces on smartphones and tablets, innovators are looking for what’s next.

“In the human-computer interaction community, the general notion we’re working under is reality-based interfaces,” says Dr. Robert Jacob, a Tufts University computer science professor studying brain-computer interfaces. “We’re trying to design what is intuitive in the real world directly into our interaction with computers.”  Read the full article…

Let us know your thoughts after you read it.  What do you think will be the next computer interface method?

 

October 15, 2012

Kinect for Windows SDK Update and What it Means for Developers

Kinect for Windows SDK Update

Kinect for Windows SDK Update is ready to download

If you didn’t get the news on Monday, Microsoft Kinect for Windows released its SDK update and launched the sensor in China. Developers and business leaders around the world are just beginning to realize what’s possible when the natural user interface capabilities of Kinect are made available for commercial use in Windows environments. The full list of features in the SDK update is in the Kinect for Windows Blog. The main benefit is that the update gives developers more powerful sensor data tools and better ease of use, while offering businesses the ability to deploy in more places. The updated SDK includes extended sensor data access, improved developer tools, and greater operating system support.

So what does this mean for developers?

The Kinect for Windows Team interviewed Engineering Manager Peter Zatloukal and Group Program Manager Bob Heddle about this very question. The short answer, according to Bob, is “because they can do more stuff and then deploy that stuff on multiple operating systems!”

But there’s a lot more to their answer.  The four basic reasons that Peter calls out to push folks to upgrade to the most recent version are:

  1. More sensor data are exposed in this release
  2. It’s easier to use than ever (more samples, more documentation)
  3. There’s more operating system and tool support (including Windows 8, virtual machine support, Microsoft Visual Studio 2012, and Microsoft .NET Framework 4.5)
  4. And it supports distribution in more geographical locations

Read more of their discussion here.

October 11, 2012

Cocktail Party Techie Term of the Week: BSDF

Cocktail Party "Techie Term" of the Week

Welcome to our new regular blog feature – our Cocktail Party Techie Term of the Week!  Here at IdentityMine we have a lot of brainy folks, in a number of disciplines, who are always hard at work making some of the coolest and most cutting-edge applications on the market.  Sitting amidst all of the brainy action we hear a lot of technical terms tossed about that would make anyone sound uber-cool when dropped at a cocktail party... as long as you know what they mean.  Maybe one of these will come in handy for you!

This week’s techie term:  BSDF

BSDF stands for Bidirectional Scattering Distribution Function. It is a mathematical function that describes how light scatters when it strikes a material. A BSDF’s parameters can describe many different types of materials, ranging from clear glass to sandblasted glass, plastics, metals, and translucent materials such as skin, porcelain, and wax.
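
As a rough illustration of the idea, here is a toy Python sketch of the reflective half of a BSDF, using a simple Lambertian-diffuse plus Blinn-Phong-specular model. This is an invented example for intuition only, not the shading model used on the Pathfinder project; real BSDFs handle transmission, roughness, and energy conservation far more carefully.

```python
import math

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _normalize(v):
    length = math.sqrt(_dot(v, v))
    return tuple(x / length for x in v)

def reflectance(normal, to_light, to_eye, diffuse=0.8, specular=0.2, shininess=64):
    """Fraction of incoming light a surface point sends toward the eye."""
    n, l, v = _normalize(normal), _normalize(to_light), _normalize(to_eye)
    h = _normalize(tuple(a + b for a, b in zip(l, v)))            # half-vector
    diffuse_term = diffuse * max(_dot(n, l), 0.0)                 # matte falloff
    specular_term = specular * max(_dot(n, h), 0.0) ** shininess  # glossy highlight
    return diffuse_term + specular_term

# Head-on light gives the full response; grazing light leaves mostly a dim matte term.
print(reflectance((0, 0, 1), (0, 0, 1), (0, 0, 1)))    # ~1.0
print(reflectance((0, 0, 1), (1, 0, 0.3), (0, 0, 1)))  # ~0.23
```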

How does IdentityMine use BSDFs in our work? A great example is our Nissan Pathfinder Kinect for Windows project, in which we painstakingly recreated an entire virtual version of the vehicle so that people could experience the car long before a real one was available to touch, feel, and test drive. The beauty of light shining on the car’s glossy surface, and the depth and realism of the color and highlights along the Pathfinder’s sleek lines, are brought to life using BSDFs.

Warning:  There is a Beet Sugar Development Foundation (BSDF).  Be careful not to get the two confused while sipping an Appletini at your next cocktail party.

This week’s term inspiration comes to us courtesy of IdentityMine’s own Howard Schargel, our Lead Immersion Architect. Thanks, Howard!