Forget touch-screens: the future of digital signage is gestures.
The concept of gesture technology isn't new, in and of itself or as applied to digital signage. But recent advances in optics, along with a dramatic drop in price, have made gesture technology simple and cost-effective. These days I see more businesses using Microsoft Kinect-based digital interfaces than I see people playing games on the device. The truth is that digital signage and gesture technology were made for each other. From a usability perspective, making a gesture is far easier than typing on a touch-screen. From an engagement perspective, gesture technology means larger screens, greater visibility and, depending on the application, faster results. The ease of use also means more people are likely to interact with the display.
One of the challenges of gesture technology for digital signage is that people don't like learning new things. Think about it: if you're in a hurry and need to gather information from a kiosk at an airport, for example, do you really want to spend time moving your hand up and down, trying to work out the difference between miming push and pull? Of course not. Airlines use touch screens because they are familiar. Most people understand computers, or at least typing, and big shiny buttons make getting to the next step easy. Interfaces change and processors come and go, but the keyboard and its trusty sidekick, the mouse, have been part of the PC for at least 30 years. They may now be about to get stiff competition, thanks to two gesture-sensing technologies set to drastically reduce the amount of typing and clicking needed to control the average computer. Still, as popular as the Kinect has been, most people don't own one, so using gestures at a digital display isn't yet going to feel intuitive.
However, according to Hal Hodson, writing in New Scientist Magazine, that may all be about to change.
By tracking hand movements precisely, the wrist-mounted prototype of the Digits project, built by a team from Microsoft Research in Cambridge, UK, allows gestures to be communicated in real time to any connected device.
An array of LEDs mounted on a plastic wrist brace facing the palm bounces infrared light off the user's fingers. A laser shines across the hand to highlight the orientation of the fingers. A camera then reads the reflections, and software builds a model of the moving hand that is accurate to within one hundredth of a centimeter. Project leader David Kim says that Digits was born of the desire for a technology more accurate than the company's Xbox Kinect gaming sensor. The aim was to track movement without tying the user to any particular device. "We had to use technologies that are small and use less power," he says. "It shouldn't interfere with daily activity, and we wanted to enable continuous interaction."
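To make the pipeline above concrete, here is a minimal sketch of the "reflections to hand model" step, assuming a simplified inverse-square falloff of reflected infrared intensity with distance. The constants, function names and the model itself are invented for illustration; Microsoft has not published how Digits actually works.

```python
# Hypothetical sketch: turning per-finger infrared reflection readings
# into distance estimates, assuming intensity = emitted / distance^2.
# This is NOT Digits' real (unpublished) algorithm.

def estimate_distance_cm(intensity, emitted=1.0):
    """Invert the assumed inverse-square model to recover distance."""
    if intensity <= 0:
        raise ValueError("no reflection detected")
    return (emitted / intensity) ** 0.5

def hand_model(readings):
    """Map per-finger IR readings to estimated distances in cm,
    rounded to 0.01 cm, the accuracy the article cites."""
    return {finger: round(estimate_distance_cm(i), 2)
            for finger, i in readings.items()}

# A stronger reflection means the finger is closer to the sensor.
readings = {"thumb": 0.04, "index": 0.01}  # arbitrary demo values
print(hand_model(readings))  # {'thumb': 5.0, 'index': 10.0}
```

The real system fuses these per-finger cues with the laser line and camera image to fit a full kinematic hand model, but the principle of inverting a light-falloff model is the same.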
The device is about the size of two ping pong balls taped together, and currently needs to be tethered to a laptop computer. But Kim plans to shrink it to the size of a wristwatch and make it wireless. In a demonstration today at a symposium on user interface software and technology in Cambridge, Massachusetts, the system was shown controlling video games, smartphones and computers. The Digits system isn’t the first such device. The Leap Motion sensor, from a San Francisco-based company of the same name, sits on a desktop reading a number of different gestures as users wave their hands overhead. The company has not yet released details on how the sensor works. Digits is a “really nice piece of work”, says Thad Starner at the Georgia Institute of Technology in Atlanta, who is also technical lead on Google’s Project Glass.
Digits is in its early stages, says Starner, who has been using a wearable computer for almost 20 years. Nonetheless, he is excited at the potential for pairing sensitive, precise control interfaces with heads-up displays like Google Glass – which looks like a pair of glasses without lenses, and allows users to see data without needing to turn their head – or his own bespoke rig. “You can imagine using really subtle gestures,” he says. “I’d use it in class to pull up notes while I’m teaching.” Starner’s own device feeds information to a display in front of his left eye. During a phone interview with New Scientist, speech-recognition software listening in on the conversation pulled up emails he had exchanged with the magazine in the past. Later, the system pushed a student’s thesis on rapid interactions with electronic devices into his field of view, deeming it relevant to the discussion.
Starner says the real power of Digits will be in continuous recognition: the ability not only to identify standalone commands, such as pressing your thumb and index finger together to skip a track on your iPod, but also to interpret hand movements in sequence. Some of his current work involves teaching American Sign Language to children with hearing difficulties using a video game. "If we had finger-tracking wristwatches they could put on and play the game, we could look at how their fingers move through time, and give them feedback," he says. "That would be really beneficial."
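The distinction Starner draws, between one-shot commands and interpreting movement over time, can be sketched in a few lines: instead of firing on a single pose, compare a whole recorded fingertip trajectory against a reference gesture and report how far the learner deviated. The reference path and tolerance below are invented for illustration and are not how his group's software actually works.

```python
# Hypothetical sketch of continuous recognition: score an entire
# recorded finger trajectory against a reference, rather than
# matching a single standalone pose.

def mean_deviation(recorded, reference):
    """Average point-by-point distance between two equal-length
    2-D fingertip trajectories."""
    assert len(recorded) == len(reference)
    dists = [((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
             for (x1, y1), (x2, y2) in zip(recorded, reference)]
    return sum(dists) / len(dists)

def feedback(recorded, reference, tolerance=0.5):
    """Simple pass/fail feedback of the kind a teaching game might give."""
    dev = mean_deviation(recorded, reference)
    return "good match" if dev <= tolerance else "try again"

reference = [(0, 0), (1, 1), (2, 2)]        # invented reference sign path
attempt = [(0, 0.2), (1, 1.1), (2, 2.3)]    # a learner's recorded attempt
print(feedback(attempt, reference))
```

Here the output is "good match" because the attempt stays within the tolerance; a sign-language teaching game could use exactly this kind of score to tell a child how closely their fingers followed the target movement over time.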
Adding wearable computing to the arsenal of human-computer interfaces represents “a symbiosis of man and machine that we haven’t seen before”, Starner says. “Having access to data on a split-second basis makes you more powerful, more in control of your life. This is going to get us to the stage where we use systems without thinking.”
# # #
We build strategies and everything that goes with them.
Some of the largest organizations in the world, including many in the mortgage and finance industries, trust us with the most important aspects of their business. From defining clients’ brands and identities to developing ongoing campaigns in a variety of media, we provide the communications and measurement tools to move them forward. Applying our experience and dedication to the media and the message, bloomfield knoble handles every detail of our clients’ strategic marketing initiatives.