Tuesday, July 20, 2010

Hello surface computing, goodbye keyboards and monitors


Picture the cellphone of the future. It is merely the size of a coin and has no keyboard and no screen. But you won’t need a keyboard or screen because everything is projected onto the nearest surface (which could even be your hand). No more worrying about tiny screens or buttons you can only press with a pin.


Welcome to the world of surface computing, where the computer interacts with its user through almost any surface. This exciting technology all started with the touch screen, which has been around since the 1970s but only became popular around a decade ago, going mainstream with the iPhone and its multi-touch screen in 2007. By 2012 there will be around 20 million gadgets with touch screens in use, making the technology commonplace. Indeed, touch screens are becoming increasingly sophisticated - the latest versions can even detect when you blow on them (Microsoft’s Future Phone is the best example of this).




Surface computing

But touch screens are so last decade. The next step is to ditch the old pressure-sensitive screens and introduce gesture-based computing. Microsoft unleashed this technology on the world in 2007 with its Surface computer. It consists of a large reflective surface with a projector underneath and several cameras that detect reflections of infrared light from objects and fingertips on its surface. Many people can use it at the same time to drag, resize and manipulate objects - kind of like having several mouse cursors. Not only can it recognise fingertips, it can also recognise objects - so, for instance, you can drop your Bluetooth-enabled camera onto the screen to upload your photos. Or if you’re in a restaurant you can put down your bottle of wine and Surface will recognise it and recommend similar wines. Microsoft Surface was unveiled in May 2007 and has since been used in restaurants, shops and hotels by the likes of Sheraton Hotels, Disneyland and AT&T. Other companies have developed the technology too, such as Circle Twelve with its DiamondTouch screen, which can tell the fingertips of different users apart.

The beauty of gesture-based multi-touch interfaces is that they allow almost any object to be used like a mouse, and because they rely on cameras they are not restricted to conventional touchscreen detection methods such as pressure, temperature or electrical resistance. So, for example, you can use a paintbrush to create digital strokes on the screen. Unfortunately, many surface computing devices are quite expensive, costing upwards of twenty times as much as a normal screen.
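To make that camera-based detection a little more concrete, here is a toy Python sketch of how a table like Surface might turn one infrared camera frame into touch points: fingertips pressed against the surface reflect the infrared light and show up as bright blobs, which get labelled and reduced to coordinates. The threshold, blob size and library choices are my own illustrative assumptions, not Microsoft’s actual implementation.

```python
import numpy as np
from scipy import ndimage

def find_touch_points(ir_frame, threshold=200, min_area=30):
    """Turn one grayscale infrared camera frame into a list of touch points.

    Fingertips resting on the surface reflect the infrared light back at the
    camera, so they appear as small bright blobs against a dark background.
    """
    bright = ir_frame > threshold                 # keep only strong IR reflections
    labels, n_blobs = ndimage.label(bright)       # group bright pixels into blobs
    touches = []
    for blob_id in range(1, n_blobs + 1):
        if np.sum(labels == blob_id) < min_area:  # ignore noise and stray glints
            continue
        cy, cx = ndimage.center_of_mass(bright, labels, blob_id)
        touches.append((cx, cy))                  # one (x, y) point per fingertip
    return touches

# A fake 480x640 frame with two "fingertips" pressed on the surface
frame = np.zeros((480, 640), dtype=np.uint8)
frame[100:110, 200:210] = 255
frame[300:312, 400:412] = 255
print(find_touch_points(frame))                   # -> two touch coordinates
```

In a real table the same list of points would be handed to the interface frame after frame, which is what lets several people drag and resize objects at once.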

 
Look ma, no screen


So, the next step is to get rid of the screen altogether. In March this year Microsoft revealed the Mobile Surface, which uses a cellphone camera and a miniature projector to project the screen onto a flat surface. A laser projector keeps the image bright in daylight and removes the need to refocus it, while a camera picks up movement between the projector and the image. Microsoft is not the only company pursuing this technology - Light Blue Optics has created a similar device called the LightTouch, which uses infrared light to detect movement on flat objects.
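One practical detail any of these projector-plus-camera gadgets has to solve is that the camera and the projected image don’t share a coordinate system, so a fingertip spotted by the camera has to be mapped onto the picture being projected. A common way to do that for a flat surface is a planar homography; the sketch below is purely illustrative (the corner positions are made up, and the devices above may well calibrate quite differently).

```python
import numpy as np
import cv2

# Where the four corners of the projected image appear in the camera view.
# These numbers are made up; in practice you would detect projected markers.
camera_corners = np.float32([[102, 80], [530, 95], [515, 410], [90, 395]])

# The same corners in the projected image's own 800x600 coordinate system.
projector_corners = np.float32([[0, 0], [800, 0], [800, 600], [0, 600]])

# For a flat surface a single 3x3 homography maps camera pixels to projector pixels.
H, _ = cv2.findHomography(camera_corners, projector_corners)

def camera_to_projector(point):
    """Map a fingertip seen by the camera into the projected UI's coordinates."""
    src = np.float32([[point]])                   # shape (1, 1, 2), as OpenCV expects
    return tuple(cv2.perspectiveTransform(src, H)[0, 0])

# A fingertip the camera saw at pixel (300, 240) lands here in the projected image:
print(camera_to_projector((300, 240)))
```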

While this is all very well, there might be times when a flat surface is nowhere to be found. The solution? Use your skin. In March this year researchers at Carnegie Mellon University and Microsoft unveiled Skinput, a system that lets people use their own hands and arms as touchscreens. The system uses tiny projectors to project images onto a person’s body, and it detects the ultra-low-frequency sounds produced when people tap different parts of their skin to work out where each tap landed. At the moment it is 95% accurate, and the sensors fit into a band worn on the arm.
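The underlying idea - turning a short burst of vibration into a guess about where the arm was tapped - is a classic classification problem. Here is a heavily simplified toy version: extract a few frequency-band energies from each vibration window and train an off-the-shelf classifier on labelled taps. The features, fake data and choice of classifier are my own simplifications, not the researchers’ actual pipeline.

```python
import numpy as np
from sklearn.svm import SVC

def band_energies(window, n_bands=8):
    """Summarise a short vibration window as the energy in a few frequency bands."""
    spectrum = np.abs(np.fft.rfft(window))
    return np.array([band.sum() for band in np.array_split(spectrum, n_bands)])

# Pretend training data: 60 recorded windows of arm vibration, each labelled
# with the spot that was tapped. Real data would come from the arm-worn sensors.
rng = np.random.default_rng(0)
windows = rng.normal(size=(60, 256))
labels = np.repeat(["palm", "wrist", "forearm"], 20)

features = np.array([band_energies(w) for w in windows])
classifier = SVC(kernel="rbf").fit(features, labels)

# At run time each new tap is windowed, featurised and classified the same way.
new_tap = rng.normal(size=256)
print(classifier.predict([band_energies(new_tap)]))
```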

Similarly, MIT has created a system called SixthSense that also uses a projector to display information on a variety of surfaces, but it tracks finger movement with a camera that detects the position of coloured markers on the user’s fingertips. The projector and camera are worn on a pendant around the neck, so the user could project a map onto a wall and navigate through it using gestures. Or, if the user wants to check his e-mail, he could simply draw an ‘@’ symbol in the air and SixthSense would pick it up and use his cellphone to retrieve his mail.
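Tracking a brightly coloured marker is one of the simpler computer-vision tricks, which is part of what makes this approach so cheap. The sketch below shows roughly how it can be done with a webcam and OpenCV - threshold the image in HSV colour space and take the centre of the remaining pixels. The colour range and camera setup are illustrative guesses, not MIT’s code.

```python
import cv2
import numpy as np

# Rough HSV range for a red marker cap; these numbers are illustrative and
# would need tuning for real lighting and a real marker.
LOWER_RED = np.array([0, 120, 120])
UPPER_RED = np.array([10, 255, 255])

def track_marker(frame):
    """Return the (x, y) centre of the coloured fingertip marker, or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_RED, UPPER_RED)   # keep only marker-coloured pixels
    moments = cv2.moments(mask)
    if moments["m00"] == 0:                         # marker not visible in this frame
        return None
    return (moments["m10"] / moments["m00"], moments["m01"] / moments["m00"])

# Read a few hundred webcam frames and print the marker position as it moves.
capture = cv2.VideoCapture(0)
for _ in range(300):
    ok, frame = capture.read()
    if not ok:
        break
    print(track_marker(frame))
capture.release()
```

String the positions together over time and you have a gesture trace - the ‘@’ drawn in the air is just a recognisable shape in that trace.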

Not all surface computing needs to use touch. SixthSense already incorporates some gesture recognition, and this is the next stage in the development of the ultimate natural user interface (one that becomes effectively invisible to the user). Imagine that in the future you won’t type on a keyboard but will type into the air. More likely, though, speech recognition software will mean you avoid typing altogether - but that’s a different story for a different time.

 
Gesture-based control


In January this year an Israeli startup called PrimeSense demonstrated a motion-sensing interface technology that allows people to use hand gestures to move objects on a screen, even if they are standing five metres away. A sensor projects a beam of light into a room to build a 3D picture of all the objects in it. It picks up hand gestures and software translates them into commands. The technology can be (and currently is being) added to TV sets, set-top boxes and DVD players, amongst other devices. PrimeSense is not alone - Hitachi and JVC have created gesture-controlled TVs, and other companies are hard at work on similar devices, including competitors in Belgium, Germany and Switzerland.
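To get a feel for what the software layer on top of such a depth sensor does, here is a toy sketch: treat the nearest blob of pixels in each depth frame as the hand, follow its horizontal position over time, and fire a ‘left’ or ‘right’ command once it has travelled far enough. The depth values and thresholds are invented for illustration and have nothing to do with PrimeSense’s actual algorithms.

```python
import numpy as np

def hand_position(depth_frame, hand_range_mm=(400, 900)):
    """Return the horizontal centre of the nearest object, assumed to be a hand.

    depth_frame is a 2D array of distances in millimetres, as a depth sensor
    would produce; pixels outside hand_range_mm are treated as background.
    """
    near, far = hand_range_mm
    hand_pixels = (depth_frame > near) & (depth_frame < far)
    if not hand_pixels.any():
        return None
    _, columns = np.nonzero(hand_pixels)
    return columns.mean()

def detect_swipe(positions, min_travel=80):
    """Turn a sequence of hand positions into 'left', 'right' or None."""
    xs = [p for p in positions if p is not None]
    if len(xs) < 2:
        return None
    travel = xs[-1] - xs[0]
    if travel > min_travel:
        return "right"
    if travel < -min_travel:
        return "left"
    return None

# Fake depth frames: a small "hand" 0.6 m away sliding right across the view.
frames = []
for offset in range(0, 200, 40):
    frame = np.full((240, 320), 3000)                  # background about 3 m away
    frame[100:140, 60 + offset:100 + offset] = 600     # the hand region
    frames.append(frame)

print(detect_swipe([hand_position(f) for f in frames]))   # -> "right"
```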

The PrimeSense software can be used to play computer and TV games, and works in much the same way as Microsoft’s Kinect system (previously known as Project Natal). This lets users stand in front of a screen and use full-body gestures (such as kicks and punches) to control an on-screen avatar. Kinect will be released for the Xbox 360 in November this year and will let users play games without a controller. The device can also recognise faces and voices in addition to 3D motion. So no longer will you need a Wii-mote to simulate digital motion. But such gesture-controlled devices will not just be used for playing games - everything from PDAs and laptops to cellphones and computers will soon be controlled via one’s fingertips.

The days of keyboards, keypads and remotes are numbered - the wave of natural user interface devices revealed this year is only the beginning. Invisible natural user interfaces are on their way. Just watch this (invisible) space.
