Microphone array aids deaf in discerning speech
June 8, 2001 | Source: KurzweilAI
Stanford University professor of electrical engineering Bernard Widrow and his students have achieved dramatic improvements in speech discernment for the deaf using adaptive signal processing.
Dr. Widrow reported the breakthrough in a keynote speech at the recent annual meeting of the Acoustical Society of America.
The Directional Hearing ARray (D-HEAR) uses six tiny microphones and signal-processing electronics (worn as a necklace) to enable people with profound hearing loss to distinguish speech in a noisy room for the first time.
Microphones in the necklace pick up the sound and transmit it to signal-processing chips that use an adaptive signal processing algorithm to reduce noise by giving different weights to input sounds from the various microphones.
The user orients his or her body toward the speaker, and surrounding sound is minimized. The microphone array homes in on the desired signal, reducing echoes and other undesirable auditory effects while increasing the clarity of the dominant signal. The optimized signal is then amplified and sent through a conducting neckloop, which wirelessly transmits a magnetic signal to the telecoil in the user’s hearing aid.
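The article does not give implementation details, but the directional behavior it describes can be illustrated with the simplest fixed-weight relative of an adaptive array: a delay-and-sum beamformer, which aligns the microphone channels for a chosen direction and averages them. Everything below (the linear geometry, spacing, sample rate, and test tone) is an invented illustration, not the D-HEAR design:

```python
import numpy as np

# Invented geometry and signals -- not the actual D-HEAR design.
fs = 16000       # sample rate, Hz
c = 343.0        # speed of sound, m/s
d = 0.03         # spacing of a hypothetical linear 6-mic array, m
mics = 6

def steer_delay(m, angle_deg):
    """Per-mic plane-wave delay, in whole samples, for a linear array."""
    return int(round(m * d * np.sin(np.deg2rad(angle_deg)) / c * fs))

def delay_and_sum(channels, angle_deg):
    """Align the channels for a source at angle_deg, then average them."""
    out = np.zeros(channels.shape[1])
    for m in range(channels.shape[0]):
        out += np.roll(channels[m], -steer_delay(m, angle_deg))
    return out / channels.shape[0]

# Simulate a 2 kHz tone arriving from 30 degrees off broadside.
t = np.arange(2048) / fs
src = np.sin(2 * np.pi * 2000 * t)
channels = np.stack([np.roll(src, steer_delay(m, 30)) for m in range(mics)])

on_target = delay_and_sum(channels, 30)    # steered at the talker
off_target = delay_and_sum(channels, -60)  # steered elsewhere
print(np.std(on_target), np.std(off_target))
```

Steering toward the source keeps the channels in phase so they add coherently; steering elsewhere misaligns them and partially cancels the tone. The adaptive scheme Widrow describes goes further by choosing the channel weights automatically.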
Widrow co-developed the least mean squares (LMS) algorithm, which finds the optimal weight vector for suppressing unknown noise and is widely used in high-speed modems.