Reverse-Engineering the Visual Process
December 19, 2001 | Source: KurzweilAI
Researchers funded by the Office of Naval Research are combining engineering and neurobiology to model mammalian brain processes. They are learning how the architecture and physiological properties of cells in the visual cortex integrate visual cues for target recognition.
“Right now we’re building a cellular-level model of a small piece of visual cortex,” says Dr. Leif Finkel, head of the University of Pennsylvania’s Neuroengineering Research Lab. “It’s a very detailed computer simulation which reflects with some accuracy at least the basic operations of real neurons.” His colleague, Kwabena Boahen, builds VLSI chips that reproduce cortical wiring, producing output spikes that closely match those of real retinae and may lead to retinal implants.
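The article doesn’t describe the lab’s model, but a common starting point for cellular-level cortical simulation is the leaky integrate-and-fire neuron: membrane voltage decays toward rest, input current charges it, and a threshold crossing emits a spike. The sketch below is illustrative only; all parameters are invented, not taken from Finkel’s model.

```python
def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0, r_m=10.0):
    """Leaky integrate-and-fire neuron (illustrative parameters).

    Returns the membrane-voltage trace and the spike times (ms)
    produced by a list of input-current samples.
    """
    v = v_rest
    voltages, spikes = [], []
    for step, i_in in enumerate(input_current):
        # Euler step of dV/dt = (-(V - V_rest) + R_m * I) / tau
        v += dt * (-(v - v_rest) + r_m * i_in) / tau
        if v >= v_thresh:            # threshold crossing -> spike
            spikes.append(step * dt)
            v = v_reset              # reset after the spike
        voltages.append(v)
    return voltages, spikes

# A constant drive for 100 ms yields a regular spike train.
volts, spike_times = simulate_lif([2.0] * 1000)
```

Real cortical models add conductance-based channels, dendritic compartments, and synaptic dynamics on top of this skeleton, but the integrate-threshold-reset loop is the core operation.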
“We’ve asked them to take a computational approach to neuroscience,” says Hawkins, a program officer in ONR’s Cognitive and Neural Sciences Division. “They’re looking at object-recognition systems that mimic the brain’s ability to find patterns in highly cluttered visual scenes by integrating information derived from bottom-up, top-down and horizontal connections among neurons in the primary visual cortex.”
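The integration Hawkins describes can be caricatured in a few lines: each unit sums its bottom-up input, lateral support from neighboring units, and a top-down prior before a detection threshold is applied, so a weak local feature can be “rescued” by context. The weights, inputs, and threshold below are invented for illustration and are not from the ONR-funded models.

```python
def integrate(bottom_up, top_down, w_lat=0.3, w_td=0.5, thresh=1.0):
    """Combine three cue streams per unit and threshold the result.

    bottom_up -- feedforward evidence at each position
    top_down  -- prior expectation at each position
    Lateral support is taken from the bottom-up input of the
    two nearest neighbors (a 1-D stand-in for horizontal wiring).
    """
    n = len(bottom_up)
    out = []
    for i in range(n):
        lateral = 0.0
        if i > 0:
            lateral += bottom_up[i - 1]
        if i < n - 1:
            lateral += bottom_up[i + 1]
        drive = bottom_up[i] + w_lat * lateral + w_td * top_down[i]
        out.append(drive >= thresh)
    return out

# A weak edge fragment (index 2) falls below threshold on its own,
# but neighbor support plus a top-down contour expectation rescue it.
bu = [0.0, 0.9, 0.4, 0.9, 0.0]
td = [0.0, 0.5, 0.8, 0.5, 0.0]
detections = integrate(bu, td)  # -> [False, True, True, True, False]
```

The point of the toy is the qualitative behavior: isolated clutter responses stay below threshold, while mutually consistent evidence along a contour crosses it.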
The Defense Department is interested in ways to build systems that can instantly pick out an individual face in a crowd and parse a visual scene into its many parts, says Hawkins. “The goal is to use engineering analysis to discern the principles of neural function, and then to use these principles in the design of neuromorphic systems. Taken another step, we could use this same principle to exploit motion information for target tracking in noise and clutter.”
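One simple way motion information separates a target from static clutter, in the spirit of Hawkins’s last remark, is rectified frame differencing: anything that stays put cancels between frames, and only newly appearing intensity survives. The grids, target brightness, and positions below are invented for illustration.

```python
def motion_energy(prev, curr):
    """Rectified per-pixel difference: keeps only intensity that
    appeared between the two frames, so static clutter cancels."""
    return [[max(c - p, 0) for p, c in zip(pr, cr)]
            for pr, cr in zip(prev, curr)]

def brightest(frame):
    """Return the (row, col) of the strongest response."""
    best, pos = -1, (0, 0)
    for r, row in enumerate(frame):
        for c, v in enumerate(row):
            if v > best:
                best, pos = v, (r, c)
    return pos

# Static clutter plus a bright target that moves from (1,1) to (1,2).
clutter = [[3, 0, 5], [0, 7, 0], [2, 0, 4]]
f1 = [row[:] for row in clutter]; f1[1][1] += 9
f2 = [row[:] for row in clutter]; f2[1][2] += 9

target = brightest(motion_energy(f1, f2))  # -> (1, 2), the new position
```

A real tracker would chain such detections over many frames and filter them (e.g., with a Kalman filter) to reject noise, but the clutter-cancellation step is what motion cues buy you.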