Gesture Rule

Ever encountered a situation where your hands were dirty and you still needed to operate a computer? The classic instance is when you are cooking in the kitchen and want to look up some information. Today's computers, with their touch or keyboard inputs, cannot be used in such situations. What if the PC could understand your gestures?

At the Yokohama Siggraph (Special Interest Group on Graphics and Interactive Techniques) Asia conference on 19 December, engineers from the Massachusetts Institute of Technology (MIT) showed a new system that lets computers recognise hand gestures. There are several gesture-recognition prototypes being developed around the world, but many of them are expensive and do not lend themselves to easy commercialisation. The MIT prototype, claim the engineers, is simpler and easier to manufacture. It consists of an LCD screen that can sense what your hand is doing.

This means you can drag windows across the screen by pointing at them and moving your fingers, or rotate 3D objects with your hand. The uses for such technology, once it is cheap and widely incorporated into computers, are easy to foresee. You can, of course, use a computer when your hands are dirty, but the technology could also let people use public computers without touching the screens (an important attribute during epidemics). It would also add an extra dimension to your inputs: you can move your fingers in three dimensions away from the screen.

Several organisations have attempted to take computers along this path, but using very different methods. One of these methods, made famous by Pranav Mistry of MIT, is to use a wearable tracking tag on the finger. Other methods use a camera on the LCD screen or behind the screen. Wearable devices are cumbersome, while putting a camera on the screen does not work perfectly. For example, a camera on the screen requires the user to be at a certain distance, and does not provide a smooth transition between gestures and touch inputs.

The current MIT solution is not to use cameras but an array of sensors behind the screen. The LCD screen rapidly alternates between states in which it is opaque and transparent to light, but this happens so quickly (in a thirtieth of a second) that the viewer does not notice it. The sensors, however, get enough time to capture the light. Each sensor gets a slightly different image because of the angle differences, and the final image is put together through computation. MIT research scientist Henry Holtzman, who worked on the project, thinks the device is much closer to commercialisation, as LCDs that allow such technologies are already being developed.
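
As a rough illustration of that computational step, the sketch below (in Python, using hypothetical sensor data and geometry, not MIT's actual algorithm) shows how views from several slightly offset sensors might be shifted back into alignment and averaged into a single composite image.

    import numpy as np

    def reconstruct(sensor_images, offsets):
        # Shift each sensor's view back by its known offset and average
        # the aligned views into one composite image.
        height, width = sensor_images[0].shape
        accumulator = np.zeros((height, width))
        for image, (dy, dx) in zip(sensor_images, offsets):
            # np.roll stands in for the geometric correction each sensor needs
            accumulator += np.roll(np.roll(image, -dy, axis=0), -dx, axis=1)
        return accumulator / len(sensor_images)

    # Hypothetical example: four sensors, each seeing the scene shifted by a pixel
    rng = np.random.default_rng(0)
    scene = rng.random((240, 320))
    offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]
    views = [np.roll(np.roll(scene, dy, axis=0), dx, axis=1) for dy, dx in offsets]
    composite = reconstruct(views, offsets)
    print(composite.shape)  # (240, 320)

In practice the reconstruction would also have to infer depth from the differences between views, which is what lets the system track fingers moving away from the screen; the averaging step above is only the simplest possible stand-in for that computation.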

There are other developments taking place around the world. One of them is at Microsoft Research's office in Cambridge, UK. Called SecondLight, this prototype uses a camera behind the screen. SecondLight introduces a second projected screen, which can display information different from that on the main screen, and it also incorporates the ability to follow gestures. We do not quite know when this technology will be commercialised. Microsoft has already introduced gesture recognition to gaming, and plans to take it to the personal computer as well.

The day is not far away when we will use three kinds of input on the computer: keyboard, touch and gesture. One day, within a decade, gesture could become the primary way of interacting with a PC.

(This story was published in Businessworld Issue Dated 28-12-2009)

