Converting images into music helps the blind identify and reach for objects

Musical tones encode vertical and horizontal location, brightness, and color
July 9, 2012


Left: An illustration of the EyeMusic SSD, showing a user with a camera mounted on the glasses, and scalp headphones, hearing musical notes that create a mental image of the visual scene in front of him. He is reaching for the red apple in a pile of green ones. Top right: close-up of the glasses-mounted camera and headphones; bottom right: hand-held camera pointed at the object of interest. (Credit: Maxim Dupliy, Amir Amedi and Shelly Levy-Tzedek)

Scientists have trained blindfolded sighted participants to perform fast and accurate movements using a new sensory substitution device (SSD) called EyeMusic.

SSDs use sound or touch to help the visually impaired perceive the visual scene surrounding them. EyeMusic, developed by Hebrew University of Jerusalem researchers, uses pleasant musical tones and scales to help the visually impaired “see.”

It scans an image and represents pixels at high vertical locations as high-pitched musical notes and low vertical locations as low-pitched notes according to a musical scale that will sound pleasant in many possible combinations. The image is scanned continuously, from left to right, and an auditory cue is used to mark the start of the scan. A hand-held camera can also be pointed at the object of interest.

The horizontal location of a pixel is indicated by the timing of the musical notes relative to the cue (the later it is sounded after the cue, the farther it is to the right), and the brightness is encoded by the loudness of the sound.

The EyeMusic algorithm assigns a different musical instrument to each of five colors: white (vocals), blue (trumpet), red (reggae organ), green (synthesized reed), and yellow (violin). Black is represented by silence.
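The mapping described above can be sketched in code. This is an illustrative reconstruction, not the published algorithm: the specific scale, note range, and column timing below are assumptions, while the four mappings themselves (vertical position to pitch, horizontal position to onset time after the cue, brightness to loudness, color to instrument, black to silence) follow the article.

```python
# Illustrative sketch of an EyeMusic-style image-to-sound encoding.
# The pentatonic scale, MIDI base note, and 0.1 s column interval
# are assumptions for demonstration only.

# Instrument per color, as listed in the article; black = silence.
INSTRUMENTS = {
    "white": "vocals",
    "blue": "trumpet",
    "red": "reggae organ",
    "green": "synthesized reed",
    "yellow": "violin",
}

# A pentatonic scale is one plausible choice of "pleasant" scale
# (the article only says musicians chose notes spanning five octaves).
PENTATONIC_SEMITONES = [0, 2, 4, 7, 9]

def midi_note(row, n_rows, base=48):
    """Map a vertical pixel position to a scale note:
    top rows -> higher pitch, bottom rows -> lower pitch."""
    steps_from_bottom = n_rows - 1 - row
    octave, degree = divmod(steps_from_bottom, len(PENTATONIC_SEMITONES))
    return base + 12 * octave + PENTATONIC_SEMITONES[degree]

def encode_image(pixels, column_interval=0.1):
    """Turn a grid of (color, brightness) pixels into note events.

    pixels[row][col] = (color_name, brightness in 0..1).
    Columns are scanned left to right: a pixel farther to the
    right sounds later after the start cue."""
    n_rows, n_cols = len(pixels), len(pixels[0])
    events = []
    for col in range(n_cols):
        onset = col * column_interval        # horizontal position -> timing
        for row in range(n_rows):
            color, brightness = pixels[row][col]
            if color == "black":
                continue                      # black is silence
            events.append({
                "time": onset,
                "pitch": midi_note(row, n_rows),
                "loudness": brightness,       # brightness -> volume
                "instrument": INSTRUMENTS[color],
            })
    return events

# Tiny 2x2 scene: a bright red pixel top-left, a dimmer green one
# bottom-right. The red note is higher-pitched and sounds first.
demo = [
    [("red", 1.0), ("black", 0.0)],
    [("black", 0.0), ("green", 0.5)],
]
for event in encode_image(demo):
    print(event)
```

Running the demo produces two note events: an early, high, loud "reggae organ" note for the red pixel, then a later, lower, quieter "synthesized reed" note for the green one, with the two black pixels contributing nothing.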

“The notes played span five octaves and were carefully chosen by musicians to create a pleasant experience for the users,” says lead investigator Prof. Amir Amedi. Sample sound recordings are available at http://brain.huji.ac.il/em/.

The study demonstrated that EyeMusic can be used after a short training period (in some cases, less than half an hour) to guide movements. “The level of accuracy reached in our study indicates that performing daily tasks with an SSD is feasible, and indicates a potential for rehabilitative use,” the researchers said.

The study lends support to the hypothesis that the brain's representation of space may not depend on the modality through which spatial information is received. It also suggests that very little training is needed to build a representation of space without vision, using sounds to guide fast and accurate movements.