Automated lip-reading invented

It’s the end of the (privacy) world as we know it…
March 24, 2016


New lip-reading technology developed at the University of East Anglia could help solve crimes and provide communication assistance for people with hearing and speech impairments.

The visual speech recognition technology, created by Helen L. Bear, PhD, and Prof Richard Harvey of UEA’s School of Computing Sciences, can be applied “any place where the audio isn’t good enough to determine what people are saying,” Bear said. Such situations include criminal investigations, entertainment, and especially environments with high levels of noise, such as cars or aircraft cockpits, she said.

Bear said unique problems in recognizing speech arise when sound isn’t available (such as in silent video footage), or when the audio is inadequate and there are no clues to give the context of a conversation, as in those ubiquitous videos where background music masks the speech. The sounds /p/, /b/, and /m/ all look similar on the lips, but the new machine lip-reading classification technology can differentiate between them for a more accurate transcription.
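To make that ambiguity concrete, here is a minimal, hypothetical sketch in Python. It is not UEA’s actual system: the feature values, the VISEME_MAP grouping, and the nearest-centroid classifier are illustrative stand-ins. It shows why a recognizer that stops at the viseme level cannot separate /p/, /b/, and /m/ (they share one visual class), while a classifier trained on finer lip-shape cues can.

```python
# Illustrative only: a toy sketch of the viseme-ambiguity problem described
# above, not the UEA method. Feature values and the nearest-centroid
# classifier are hypothetical stand-ins for real lip-shape features.
import numpy as np

# In a conventional many-to-one mapping, /p/, /b/, and /m/ all collapse into
# a single "bilabial" viseme class, so a viseme-level recognizer cannot
# tell them apart.
VISEME_MAP = {"/p/": "bilabial", "/b/": "bilabial", "/m/": "bilabial",
              "/f/": "labiodental", "/v/": "labiodental"}

# Hypothetical per-phoneme centroids of lip-shape features
# (e.g., lip aperture, lip spread, frame-to-frame motion), chosen so the
# three bilabials differ only subtly, as they do on real lips.
CENTROIDS = {
    "/p/": np.array([0.10, 0.52, 0.80]),  # brief closure, fast release
    "/b/": np.array([0.12, 0.50, 0.72]),  # voiced twin of /p/: very close
    "/m/": np.array([0.11, 0.48, 0.35]),  # longer closure, slower motion
}

def classify(features: np.ndarray) -> str:
    """Return the phoneme whose centroid is nearest to the observed features."""
    return min(CENTROIDS, key=lambda p: np.linalg.norm(features - CENTROIDS[p]))

observed = np.array([0.11, 0.49, 0.70])  # one hypothetical lip-shape frame
print(VISEME_MAP[classify(observed)])    # viseme level: 'bilabial' (ambiguous)
print(classify(observed))                # phoneme level: '/b/'
```

A real system would of course learn such decision boundaries statistically from many hours of video rather than from hand-set centroids; the sketch only illustrates why phoneme-level discrimination is the harder, and more useful, problem.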

“We are still learning the science of visual speech and what it is people need to know to create a fool-proof recognition model for lip-reading, but this classification system improves upon previous lip-reading methods by using a novel training method for the classifiers,” said Bear.

“Lip-reading is one of the most challenging problems in artificial intelligence, so it’s great to make progress on one of the trickier aspects, which is how to train machines to recognize the appearance and shape of human lips,” said Harvey.

The research, part of a three-year project supported by the Engineering and Physical Sciences Research Council (EPSRC), will be presented at the International Conference on Acoustics, Speech and Signal Processing (ICASSP) in Shanghai.