Touchless interaction – why the future is about letting go: part 2/2
21 Nov 2013
3. Cameras and motion sensors
Another opportunity for touchless interaction comes from imaging sensors, such as cameras, that interpret the world around the device. If a device can ‘see’, then it can offer new modes of interaction, such as physical gestures, motion tracking and facial recognition.
Facial recognition can be used as a security feature, eliminating the need for passwords and making unlocking a phone faster and more natural – just look at your phone and you can use it, no passcode required (although both this and Touch ID fingerprint scanners have been shown to be vulnerable to hackers). Samsung released a raft of (albeit gimmicky) hands-free interactions with the Galaxy S4, including: ‘Eye Scroll’ (eye-tracking technology that allows the user to scroll through a page just by moving their eyes), ‘Tilt Scroll’ (scroll a page by tilting the device), and ‘Air Gestures’ (control the phone using hand gestures). The phone can even pause a video automatically when you look away from the screen, then resume it when you look back.
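Under the hood, most of these features start with plain face detection: is a face currently visible to the front camera? As a rough illustration (not Samsung’s actual implementation), here is a minimal Python sketch using OpenCV’s bundled Haar cascade to notice when the viewer looks away; the pause/resume calls are hypothetical placeholders for a real video player.

```python
# A minimal face-presence sketch, assuming the opencv-python package
# and a webcam; the pause/resume hooks are hypothetical placeholders.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
camera = cv2.VideoCapture(0)          # default webcam

watching = True
while True:
    ok, frame = camera.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3,
                                          minNeighbors=5)
    if len(faces) == 0 and watching:
        watching = False
        print("pause video")          # hypothetical player hook
    elif len(faces) > 0 and not watching:
        watching = True
        print("resume video")         # hypothetical player hook
```

Real implementations add gaze checks and debouncing on top, so a brief glance away doesn’t stutter playback.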
In the living room, Smart TVs are getting smarter by offering a whole host of touchless interactions. No longer do you need to hunt for the remote: instead, the built-in camera recognises your face and logs you in to your profile, giving you instant access to your favourite content. Hand gestures then control TV functions – swiping to navigate and grabbing to select – and finally a voice command turns your TV off when you’re done. It might sound like something from Tomorrow’s World, but amazingly it’s all possible with a £500 TV you can buy on the high street today.
Much of this technology was first brought into our homes by games consoles such as the Nintendo Wii and Microsoft’s Kinect for Xbox 360. The Kinect combines a 3D depth sensor, an RGB camera and a multi-array microphone to provide full-body 3D motion capture, facial recognition and voice recognition. Together, these let the user control and interact with the console without ever touching a games controller. It was a game-changer that has brought touchless interaction into 24 million living rooms.
The Kinect hasn’t just been great for the living room, however: its software development kit (SDK) has allowed programmers to use the Kinect’s sensors in a wide variety of scenarios outside home entertainment. ‘Kinect Hacks’ are posted almost daily on the internet, ranging from art installations, music videos and robot control to live musical performance and even vascular surgery. A thriving hacking community like this helps showcase the possibilities of touchless interaction.
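To give a flavour of how such hacks begin, here is a minimal sketch assuming the open-source libfreenect Python bindings (a popular alternative route to the official SDK): it reads the Kinect’s raw depth image and fires an action when something moves close to the sensor. The threshold values and the trigger hook are illustrative assumptions, not calibrated figures.

```python
# A minimal 'Kinect hack' sketch, assuming the libfreenect Python
# bindings ('freenect') and a Kinect plugged in over USB.
import freenect
import numpy as np

NEAR = 600    # raw 11-bit depth units; lower = closer (assumed cut-off)
BLOB = 5000   # pixel count that roughly suggests a hand (assumption)

while True:
    depth, _ = freenect.sync_get_depth()    # 480x640 array of raw depths
    near_pixels = np.count_nonzero(depth < NEAR)
    if near_pixels > BLOB:
        # Hypothetical hook: swap in robot control, visuals, music...
        print("gesture trigger")
```

Swap the print for a MIDI message, a robot command or a projected visual, and you have the seed of most of the hacks above.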
PrimeSense, the makers of the Kinect’s motion capture sensors, released an interesting video (warning: very cheesy) to showcase Capri, their next generation of 3D sensors, which they claim will bring precise, fast motion tracking to everything – from laptops and TVs to elevators, robots and appliances everywhere. It looks like ‘Kinect Hacks’ are going mainstream.
Leap Motion is a 3D motion control device that claims 200 times the sensitivity of the Kinect. It’s a tiny device that plugs into any computer over USB and enables touchless gesture interactions on the desktop, similar to the Kinect. We recently got one at The Real Adventure – it’s amazingly accurate and intuitive, it has real applications (beyond the initial sensation that you are Harry Potter or in Minority Report), and it comes with its own ‘app store’ full of tools and games to play around with. Autodesk has created a plugin that lets you use the Leap Motion to control Maya, a piece of industry-standard 3D graphics and animation software. Using the Leap’s sophisticated 3D sensors, you can manipulate 3D shapes on screen with your hands, much as a traditional model maker or sculptor would – a natural interaction with all the benefits of computer software (the ‘undo’ button being one).
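For a sense of what driving an application from the Leap looks like in code, here is a minimal polling sketch assuming the Python bindings shipped with the Leap SDK, mapping the first tracked palm’s position to an on-screen transform. The move_shape() function is a hypothetical stand-in for whatever your application (or a plugin like Autodesk’s) would actually drive.

```python
# A minimal Leap Motion polling sketch, assuming the Python bindings
# shipped with the Leap SDK (import Leap).
import time
import Leap

def move_shape(x, y, z):
    # Hypothetical hook: update a 3D shape's transform in your app.
    print("shape at %.0f, %.0f, %.0f mm" % (x, y, z))

controller = Leap.Controller()
while True:
    frame = controller.frame()                # most recent tracking frame
    if not frame.hands.is_empty:
        palm = frame.hands[0].palm_position   # Leap.Vector, in millimetres
        move_shape(palm.x, palm.y, palm.z)
    time.sleep(0.02)                          # ~50 updates per second
```

The SDK also offers an event-driven listener interface; polling is simply the shortest way to show the idea.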
Other motion controllers include the MYO armband, which reads the electrical activity in your muscles to let you wirelessly control your computer and other devices, and Haptix, a sensor that turns any flat surface into a 3D multitouch surface. Although the latter is not technically a touchless interaction, it’s another example of how we are moving away from traditional screens.
By giving a computer ‘eyes’, cameras and other motion sensors offer new types of interaction, from face recognition to gesture control. They open up the opportunity of interaction in new spaces, and provide a truly personalised experience. The next step is likely to be a better understanding of context: by knowing who you are, where you are, what you are doing and perhaps even the emotions you are experiencing, computers will be able to adapt automatically to our needs.
4. Bluetooth and NFC
Wireless communication technologies such as Bluetooth and Near Field Communication (NFC) allow for interactions based on proximity and location. For a long time now, we’ve been promised both jetpacks and contactless payments through our mobile phones. Like the personal jetpack, it’s never quite happened – but it’s close. While countries like Japan have embraced NFC-based payments for years, the technology has never quite caught on in the West, partly because Apple has never incorporated NFC into the iPhone. Now, though, Bluetooth low energy (BLE) is being touted as the next big thing for contactless payments. Using BLE beacon transmitters and geo-fencing, retailers can identify consumers and pinpoint their location.
Apple markets the technology as iBeacon, and because BLE chips ship in most modern phones, support should be much wider than for NFC – it might just change the way we do commerce on the high street. Instead of queuing at the till to pay, the future may see our accounts debited automatically as we leave the store, and loyalty rewards offered based on our location history. Many see commerce without tills as the ultimate touchless interaction: walk into a shop, order a sandwich, eat, walk out. No fiddling around with card machines, no long queues for the checkout – as PayPal’s new Beacon solution demonstrates.
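For a rough sense of how a phone gauges its distance from a beacon: each iBeacon advertisement carries a calibrated signal strength measured at one metre, and the receiver compares that with the strength it actually sees. Here is a back-of-envelope sketch using the standard log-distance path-loss model; the default values are illustrative assumptions, and real deployments need smoothing and calibration on top.

```python
# A back-of-envelope BLE proximity estimate using the log-distance
# path-loss model. tx_power is the beacon's calibrated RSSI at 1 m
# (broadcast in the iBeacon advertisement); n is an assumed
# environment factor (~2.0 in free space, higher indoors).
def estimate_distance_m(rssi, tx_power=-59, n=2.0):
    """Rough distance in metres from a received signal strength (dBm)."""
    return 10 ** ((tx_power - rssi) / (10 * n))

print(estimate_distance_m(-59))   # 1.0  -> right next to the beacon
print(estimate_distance_m(-75))   # ~6.3 -> across the shop floor
```

In practice radio signals bounce and fade, so apps bucket these estimates into coarse zones (immediate, near, far) rather than trusting exact metres.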
By DMA guest blogger James Reece, User Experience Specialist, The Real Adventure
This is an edited version of a blog that first appeared on The Real Adventure