Robots use Kinect to understand our world

July 22, 2011 | Source: New Scientist Tech

Researchers at Cornell University are teaching robots to understand the context of their surroundings so that they can pick out individual objects in a room.

Microsoft’s Kinect sensor perceives real-world 3-D scenes by combining a visible-light camera with depth information from an infrared sensor. The researchers’ algorithm learns to recognize particular objects by studying images labelled with descriptive tags such as “wall,” “floor,” and “tabletop.”
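As a rough illustration of that learning step (not the Cornell team’s actual method), the sketch below trains an off-the-shelf classifier to tag scene segments from a few simple geometric and colour features. The features, training values, and labels are all hypothetical.

```python
# Hypothetical sketch: learn to tag scene segments ("wall", "floor",
# "tabletop") from simple geometric/colour features. Illustrative only;
# this is not the Cornell algorithm.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row is one segment from a labelled RGB-D scene:
# [mean height (m), surface-normal z-component, mean colour intensity]
train_features = np.array([
    [1.4, 0.05, 0.6],   # vertical surface, high up          -> wall
    [0.0, 0.98, 0.4],   # horizontal surface at floor level  -> floor
    [0.7, 0.95, 0.7],   # horizontal surface at ~70 cm       -> tabletop
])
train_labels = ["wall", "floor", "tabletop"]

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(train_features, train_labels)

# Tag a new segment observed by the depth camera.
new_segment = np.array([[0.72, 0.97, 0.65]])
print(model.predict(new_segment))  # expected: ['tabletop']
```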

To find out how the algorithm performed in a real-world setting, the researchers mounted a Kinect on a mobile robot and asked it to find a keyboard. “Knowing” that keyboards are often found near monitors, the robot scanned its surroundings until it spotted a computer monitor, then moved in for a closer look.
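One way to make that “keyboards are found near monitors” intuition concrete is to rank candidate search regions by how close they are to already-detected context objects. The sketch below is purely illustrative; the co-occurrence scores and object positions are invented.

```python
# Illustrative context-driven search: look for a target object near
# objects it commonly co-occurs with. All numbers here are made up.
import math

# Prior belief about which objects the target is usually found near.
co_occurrence = {"keyboard": {"monitor": 0.9, "sofa": 0.1}}

# Objects the robot has already recognised, with (x, y) positions in metres.
detections = {"monitor": (2.0, 1.5), "sofa": (5.0, 0.5)}

def score_region(region_xy, target):
    """Higher score = more promising place to look for the target."""
    best = 0.0
    for obj, prior in co_occurrence[target].items():
        if obj in detections:
            dist = math.dist(region_xy, detections[obj])
            best = max(best, prior / (1.0 + dist))  # prefer nearby context
    return best

candidate_regions = [(2.2, 1.4), (4.8, 0.6), (0.0, 3.0)]
best_region = max(candidate_regions, key=lambda r: score_region(r, "keyboard"))
print(best_region)  # expected: (2.2, 1.4), the region next to the monitor
```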