Thursday, 10 December 2015

Ear and Tongue Sensors Combine to Understand 'Silent Speech'


Latest Invention: 'Silent Speech' Magnetic Tongue Control System


A new device can recognize 'silent speech' by tracking the tongue and ears. Once trained to identify suitable phrases, it could let people who are disabled, or who work in loud environments, quietly control wearable devices.

The device depends in part on a magnetic tongue control system originally designed to help people with paralysis drive a powered wheelchair through tongue movements. However, the researchers worried that technology relying on a magnetic tongue piercing, or a sensor affixed to the tongue, could be too invasive for some users.

The Tongue Drive System (TDS) is the work of a team led by Jeonghee Kim at the Georgia Institute of Technology in Atlanta, US. The system requires users to pierce their tongue with a barbell-shaped device. Thad Starner, a professor at Georgia Tech and technical lead on the wearable computer Google Glass, was inspired to try tracking ear movements after an appointment with a dentist.

Dentist-Inspired Silent Speech Recognition


The dentist had stuck a finger in Starner's ear and asked him to bite down, a quick test of jaw function. As his jaw moved, the space in his ear changed shape as well. This led him to wonder whether that effect could be used for silent speech recognition.

The resulting device combines tongue control with earpieces that look somewhat like headphones, each embedded with a proximity sensor that uses infrared light to map the changing shape of the ear canal. Different words require different jaw movements, deforming the canal in slightly different ways.

For the test, the team listed 12 phrases that could be essential, such as 'I need to use the bathroom' or 'Give me my medicine, please'. People were then recorded repeating these while wearing the device. With both the tongue and ear trackers in, the software could recognize what the wearer was saying almost 90% of the time. Using the ear trackers alone, the accuracy was somewhat lower. The researchers hope to build a phrasebook of useful words and sentences that can be recognized from the ear data.
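The article does not describe the team's actual recognition algorithm, but phrase recognition over a small fixed vocabulary is often done by matching an incoming sensor trace against recorded templates. The sketch below illustrates that idea with dynamic time warping (DTW) over one-dimensional proximity-sensor readings; all function names, signal shapes, and data are hypothetical, not the researchers' method.

```python
# Minimal sketch: template-based phrase recognition over 1-D sensor
# traces. DTW nearest-neighbor matching is an assumed technique; the
# article does not state what algorithm the Georgia Tech team used.
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D signals."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

def recognize(trace, templates):
    """Return the phrase whose enrolled template is nearest to `trace`."""
    return min(templates, key=lambda phrase: dtw_distance(trace, templates[phrase]))

# Hypothetical enrollment: one recorded ear-canal trace per phrase.
templates = {
    "I need to use the bathroom": np.sin(np.linspace(0, 3 * np.pi, 60)),
    "Give me my medicine, please": np.cos(np.linspace(0, 2 * np.pi, 60)),
}
# A noisy live reading of the first phrase, at a slightly different speed.
live_trace = np.sin(np.linspace(0, 3 * np.pi, 55)) + 0.05 * np.random.randn(55)
print(recognize(live_trace, templates))
```

DTW is a natural fit here because the same phrase mouthed at different speeds produces traces of different lengths, which plain point-by-point comparison would penalize.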

Jaw-emes


Abdelkareem Bedri, a graduate student at Georgia Tech, says that they call these patterns 'jaw-emes'. The team has also started looking into other possible uses for the ear data. One experiment with an improved version of the ear trackers reached 96% accuracy in recognizing simple jaw gestures, such as a move from left to right.

Such gestures would let the wearer discreetly control a wearable device. Heartbeat monitoring also appears possible, and could help the system verify that it is placed properly in the wearer's ears.
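As a toy illustration of how a left-to-right jaw gesture might be told apart from its mirror image, one could compare which earpiece's canal deforms first. Everything in the sketch below is an assumption for illustration; the team's actual gesture features are not described in the article.

```python
# Toy sketch: label a jaw sweep by which ear canal deforms first.
# Channel layout, threshold, and test signals are all hypothetical.
import numpy as np

def detect_sweep(left, right, threshold=0.5):
    """Classify a gesture from two in-ear proximity channels.

    Returns 'left-to-right', 'right-to-left', or 'none'.
    """
    l_dev = np.abs(left - left[0])    # deviation from resting shape
    r_dev = np.abs(right - right[0])
    if max(l_dev.max(), r_dev.max()) < threshold:
        return "none"                 # canal barely moved
    # Whichever canal peaks earlier is where the sweep started.
    return "left-to-right" if np.argmax(l_dev) < np.argmax(r_dev) else "right-to-left"

# Synthetic test: the left canal deforms before the right one.
t = np.linspace(0, 1, 100)
left = np.exp(-((t - 0.3) ** 2) / 0.01)
right = np.exp(-((t - 0.7) ** 2) / 0.01)
print(detect_sweep(left, right))  # left-to-right
```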

Bruce Denby, who works on silent speech in his lab at the Pierre and Marie Curie University in Paris, says that demonstrating the technology is 'industry ready' could be crucial in bringing it to market. He adds: 'The true holy grail of silent speech is continuous speech recognition. However, the potential of recognizing even a limited set of phrases is a tremendous boon already for some disabled individuals.'
