Reading lips is a skill usually reserved for fictional spies or the hearing impaired, but researchers have spent years trying to gift the talent to computers, too. A device capable of automated lip-reading would certainly be a game changer, raising questions of personal privacy while simultaneously creating new opportunities in the accessibility and security industries. Don’t get too nervous (or excited) though — Ahmad Hassanat, a researcher at Mu’Tah University in Jordan, says we have a long way to go before machine eyes can tell what we’re saying.
Hassanat explains that we have major hurdles to leap before we can expect machines to decode lip movement: human speech uses more than 50 distinct sounds to form words and syllables, but the mouth itself forms only between 10 and 14 distinguishable shapes. Lip reading isn't simply a matter of recognizing those shapes and stringing the corresponding sounds together; it's partly guesswork. To suss out exactly which sounds a speaker is making, lip readers have to take in body language, facial expressions and the context of the conversation to help them decipher words.
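The many-to-one problem Hassanat describes can be sketched in a few lines of Python. The groupings below are an illustrative toy table, not a standard linguistic mapping, but they show why the same mouth shape leaves a reader (human or machine) choosing between several candidate sounds:

```python
# Illustrative sketch: several distinct speech sounds (phonemes) collapse
# onto a single visible mouth shape (a "viseme"), so lip shape alone is
# ambiguous. These groupings are simplified examples, not a standard mapping.
VISEME_GROUPS = {
    "closed-lips": ["p", "b", "m"],       # all three look identical on the lips
    "lip-to-teeth": ["f", "v"],
    "rounded-lips": ["w", "oo"],
}

def candidate_phonemes(viseme):
    """Return every phoneme a given mouth shape could plausibly represent."""
    return VISEME_GROUPS.get(viseme, [])

# A reader seeing closed lips cannot tell "pat", "bat" and "mat" apart:
print(candidate_phonemes("closed-lips"))  # → ['p', 'b', 'm']
```

Resolving that ambiguity is exactly where the context and body-language cues come in: the word itself, not the lips, tells you which candidate was spoken.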
The researcher’s own experiments have produced an average success rate of 76 percent, but Hassanat says the technology remains far from reliable. In addition to missing out on contextual clues, he says, automated systems often fumble when reading the words of bearded men. You can read his write-up for yourself at the source link below.
[Image credit: Chev Wilkinson]
Lip reading is still too hard for computers