Mimicking How the Brain Recognizes Street Scenes

February 8, 2007 | Source: KurzweilAI

A computational model of how the brain processes visual information in a complex, real-world task has been applied to recognizing the objects in a busy street scene.

Scientists in Tomaso Poggio’s laboratory at the McGovern Institute for Brain Research at MIT “showed” the model randomly selected images so that it could “learn” to identify commonly occurring features in real-world objects, such as trees, cars, and people. In supervised training sessions, the model then used those features to label, by category, the varied examples of objects found in digital photographs of street scenes: buildings, cars, motorcycles, airplanes, faces, pedestrians, roads, skies, trees, and leaves.
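The two-stage process described above, unsupervised learning of common features from randomly selected images followed by supervised category labeling, can be sketched roughly as follows. This is an illustrative toy, not the laboratory's actual model: the synthetic "images," the prototype-matching features, and the nearest-centroid classifier are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1 (unsupervised): "learn" common features from randomly selected
# unlabeled images. Here an "image" is just a flat 16-dim vector, and a
# learned feature is a stored prototype -- a stand-in for the real model's
# learned patch features.
unlabeled = rng.normal(size=(50, 16))
prototypes = unlabeled[rng.choice(50, size=10, replace=False)]

def extract_features(image):
    # Respond to each stored prototype by similarity (dot product) -- a
    # crude analogue of matching an image against learned features.
    return prototypes @ image

# Stage 2 (supervised): use the feature responses to label examples by
# category, via a simple nearest-centroid classifier.
def train(examples, labels):
    feats = np.array([extract_features(x) for x in examples])
    return {c: feats[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict(image, centroids):
    f = extract_features(image)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))

# Toy supervised session: two synthetic, well-separated categories.
X = np.vstack([1.0 + 0.1 * rng.normal(size=(20, 16)),
               -1.0 + 0.1 * rng.normal(size=(20, 16))])
y = np.array(["car"] * 20 + ["tree"] * 20)
centroids = train(X, y)
print(predict(np.full(16, 1.0), centroids))   # classifies as "car"
```

Note that the same two functions would train on any category given labeled examples, which is the point the article makes about the model's versatility.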

The Poggio model for object recognition takes as input the unlabeled images of digital photographs from the Street Scene Database (top) and generates automatic annotations

Compared to traditional computer-vision systems, the biological model was surprisingly versatile. Traditional systems are engineered for specific object classes. For instance, systems engineered to detect faces or recognize textures are poor at detecting cars. In the biological model, the same algorithm can learn to detect widely different types of objects.

To test the model, the team presented full street scenes consisting of previously unseen examples from the Street Scene Database. The model scanned the scene and, based on its supervised training, recognized the objects in the scene. The upshot is that the model learned from examples, which, according to Poggio, is a hallmark of artificial intelligence.