Robots with insect brains

February 14, 2014

(Credit: Freie Universität Berlin)

German researchers have developed a robot that mimics the simple nervous system honeybees use for olfactory learning, substituting colors for odors.

The researchers mounted a camera on a small robotic vehicle and connected it to a computer. The computer program replicates, in simplified form, the sensorimotor neural network of the insect brain and operates the robot’s wheel motors, controlling its motion and direction based on the colors the camera detects.
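To make the setup concrete, here is a minimal sketch of such a sensorimotor loop in Python. This is illustrative only, not the authors’ code: the color threshold, the wheel-speed values, and the function names are all assumptions.

```python
# Illustrative sketch of the sensorimotor loop described above -- not the
# authors' code. A camera frame is reduced to a color decision, and the
# decision is mapped to wheel speeds. Thresholds and speeds are assumptions.
import numpy as np

def detect_color(frame_rgb: np.ndarray) -> str | None:
    """Return 'red' or 'blue' if that channel clearly dominates the frame."""
    mean_rgb = frame_rgb.reshape(-1, 3).mean(axis=0)  # average R, G, B
    if mean_rgb[0] > 1.5 * mean_rgb[2]:
        return "red"
    if mean_rgb[2] > 1.5 * mean_rgb[0]:
        return "blue"
    return None

def motor_command(color: str | None) -> tuple[float, float]:
    """Map the detected color to (left, right) wheel speeds.

    In the real system this mapping is not hardwired; it emerges from
    the learned connections in the spiking network.
    """
    if color == "red":
        return (1.0, 1.0)    # approach the rewarded color
    if color == "blue":
        return (-1.0, -1.0)  # back away from the other color
    return (0.0, 0.0)        # no target in view
```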

“The network-controlled robot is able to link certain external stimuli with behavioral rules,” said Professor Martin Paul Nawrot, head of the research team and professor of neuroscience at Freie Universität Berlin. “Much like honeybees learn to associate certain flower colors with tasty nectar, the robot learns to approach certain colored objects and to avoid others.”

The learning experiment

The scientists placed the network-controlled robot in the center of a small arena with red and blue objects on the walls. Once the robot’s camera focused on an object of the desired color, the scientists triggered a light flash. This signal activated a “reward sensor nerve cell” in the spiking neural network. The simultaneous processing of red and the reward caused the robot to move toward the object; blue made it move backwards.

Left: robot hardware. The camera output is processed on the Arduino board and sent to the open-source iqr spiking neural network simulator as a 1 or 0, depending on whether a colored region was found, where it is translated into spike trains. Right: neural network architecture from sensory input to motor output. Red and blue connections indicate excitatory and inhibitory synapses, respectively. Green connections indicate modulatory synapses that are adjusted during reinforcement. Numbers under each group indicate the number of artificial neurons (credit: L. I. Helgadóttir et al.).
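The caption’s “1 or 0 … translated into spike trains” step can be illustrated with a simple rate-coding scheme. The Poisson encoding below is our assumption for illustration; the paper does not specify iqr’s exact encoding.

```python
# Hedged sketch of the "1 or 0 -> spike train" step from the caption above.
# Poisson rate coding is our illustration; iqr's internal scheme may differ.
import numpy as np

def binary_to_spike_train(detected: int, duration_ms: int = 100,
                          rate_hz: float = 100.0,
                          seed: int | None = None) -> np.ndarray:
    """1 ms-binned train: Poisson spikes at rate_hz if detected, else silence."""
    rng = np.random.default_rng(seed)
    if not detected:
        return np.zeros(duration_ms, dtype=int)
    p_spike = rate_hz / 1000.0  # spike probability per 1 ms bin
    return (rng.random(duration_ms) < p_spike).astype(int)

train = binary_to_spike_train(1)
print(train.sum(), "spikes in 100 ms")  # roughly 10 expected at 100 Hz
```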

“Within seconds, the robot accomplishes the task of finding an object of the desired color and approaching it,” explained Nawrot. “Only a single learning trial is needed, similar to experimental observations in honeybees.”
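A toy reward-gated update rule shows how a single pairing can suffice. This is a sketch of the general three-factor idea behind the modulatory synapses, with hypothetical names and a deliberately high learning rate; it is not the published network.

```python
# Toy, reward-gated plasticity rule illustrating how one pairing of color
# and reward can suffice (one-shot learning). Our sketch of the general
# idea, not the network published in the paper.
class AssociativeUnit:
    def __init__(self, weight: float = 0.0, learning_rate: float = 1.0):
        self.weight = weight     # strength of the color -> approach link
        self.lr = learning_rate  # lr = 1.0 makes one trial sufficient

    def update(self, color_active: bool, reward_active: bool) -> None:
        # Plasticity only when presynaptic (color) activity coincides with
        # the global reward signal -- the role of the modulatory synapses.
        if color_active and reward_active:
            self.weight += self.lr * (1.0 - self.weight)  # saturating step

    def drive(self, color_active: bool) -> float:
        # After learning, the color alone drives the approach behavior.
        return self.weight if color_active else 0.0

unit = AssociativeUnit()
unit.update(color_active=True, reward_active=True)  # single learning trial
print(unit.drive(color_active=True))                # 1.0 after one pairing
```

With `learning_rate = 1.0`, one coincident color-plus-reward event drives the weight to its maximum, so the very next presentation of the color alone produces the full approach response.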

The scientists are planning to expand their neural network by adding more learning principles.

Future real-world applications

“Our work and the paper focus on basic science,” Tim Landgraf, head of the Biorobotics Lab at Freie Universität Berlin, explained to KurzweilAI in an email interview. “We first want to understand how fundamental processes like learning and memory enable the animal (many of our studies use the honeybee as a model) to accomplish complex tasks.

“Ultimately, this will improve our understanding of the function of our own human brain. And once we understand how to employ realistic, brain-like processing structures to solve real-world problems, this will have an impact on how robots, and artificial systems in general, are programmed.

“Rather than writing millions of lines of code for solving problems, we will lean back and watch adaptive, neural systems learn the structure of their environments. First as virtual brains in a simulation of the world and then, once they have sufficiently matured, in the real world.

“As far as we know, we were the first to show that robots can be conditioned in a one-shot learning experiment with spiking neural networks.” However, he admits that the biggest unknown is the neuromorphic (spiking) hardware. “Currently, researchers are using simulations on big computing machines, nothing that would fit on a robot. Neuromorphic chips emulate neuronal activity in small analog circuits. They might be available commercially within the next ten years or so. I can’t say whether they will be powerful enough (number of neurons, synaptic plasticity, etc.) to be applicable in complex real-world scenarios by then.”

Funding for the research is provided by the National Bernstein Network Computational Neuroscience in Germany and the German Federal Ministry of Education and Research.


Abstract of the 6th International IEEE/EMBS Conference on Neural Engineering (NER) paper

Insects show a rich repertoire of goal-directed and adaptive behaviors that are still beyond the capabilities of today’s artificial systems. Fast progress in our comprehension of the underlying neural computations makes the insect a favorable model system for neurally inspired computing paradigms in autonomous robots. Here, we present a robotic platform designed for implementing and testing spiking neural network control architectures. We demonstrate a neuromorphic real-time approach to sensory processing, reward-based associative plasticity and behavioral control. This is inspired by the biological mechanisms underlying rapid associative learning and the formation of distributed memories in the insect.