Forget the mouse and keyboard, and even the swipe, pinch and touch – the next generation of human-computer interactions will be the gesture, the body movement and even thoughts from the human brain. What we’re experiencing today is nothing less than a revolution in the human-computer interface, driven by a convergence of gaming systems, computers and hand-held mobile devices. One day we may look back on the past 30 years of computing history and see a linear narrative of mankind’s attempts to create the perfect natural user interface.
The concept of the “natural user interface” is based on the notion that the interface between human and computer should be as invisible as possible. With the introduction of touch screen-based computing into the mainstream, we’re already moving toward a better, more intuitive computing interface. (Ever notice how many people now treat ATM screens as if they are iPad-like touch screens?) In a recent survey of five technologies that will transform business, Razorfish firmly located the Natural User Interface as the next evolution of what we’ve already experienced with tablet computing. The iPad, in short, has prepared us for the next iteration of computer interface: “The technologies going beyond the touch experience are sufficiently advanced and they can feel like magic.”
In fact, the really cool stuff happens once you start to think like a child. We’re now seeing the development of new types of interfaces that have their genesis in gaming platforms like Microsoft’s hands-free Kinect for the Xbox 360. Released to much fanfare at the end of 2010 (and subsequently hacked to popular acclaim), the Kinect uses cameras and infrared to recognize enough of your actions to enable full gesture recognition without needing a controller or touching a screen. As Razorfish points out, “much like the original iPhone brought touch interaction into the mainstream by putting millions of devices in the hands of consumers, Xbox Kinect will do the same for gesture control. Imagine being able to virtually try on clothes from the comfort of your own home. Or order a pizza with a flick of the wrist from the comfort of your couch.”
The natural user interface could lead to fundamentally new types of interactive experiences – such as filling out, say, tax forms using hand gestures and spoken commands rather than using a physical keyboard and mouse. Companies like Toshiba and Mercedes are working on new computer interfaces that integrate facial recognition and mood recognition to complement certain gestures. The natural user interface is indeed “natural” – it is meant to encourage intuitive actions that mimic real-world experiences and break down the wall between “expert” and “novice.”
So what’s next? If you buy into the concept of The Singularity, then it’s the seamless integration of human and computer, in which the human-computer interface is the human body itself. This may be a lot closer than you think, considering the various ways that people are willing to embed digital gadgets inside their bodies. Getting information from the Web may one day be so easy that it no longer requires typing a string of words into a search box – all it requires is a simple gesture, and information anywhere in the world is yours for the asking. To paraphrase Brooke Shields, nothing will come between you and your computer.