I recently came across news of a device that geeked me out. It's a neckband that can detect and analyze neural firings when we think about saying something, and translate them into audible words via a speech synthesizer. Beyond the obvious use of bettering the lives of people who've lost their ability to speak, it could enable us to make phone calls without having to actually talk (as is demonstrated in a video in this article). The creators of the device mention that they'll have a product by the end of the year for people with ALS (a.k.a. Lou Gehrig's Disease).
In my aforementioned geek-out craze I told my girlfriend about the device, called the Audeo; she immediately identified the problem of the device saying a thought you don't actually want the other person to hear. You're on the phone with your boss when you suddenly hear the device blurt out "Are you never going to shut up about those damn TPS Reports!?"
Good point. But the creators say the device can differentiate between things that you’re thinking, and things that you actually want to say. You have to think about using your voice for the device to pick up on it.
I'm sure this ability is a beneficial byproduct of making the device a "collar" around the neck that monitors the nerves controlling the muscles of the larynx.
Our Head’s Too Messy, Go for the Neck
The device is not a brain interface worn on the head, so it stands to reason that (a) they are monitoring neural activity to the muscles that control speech (larynx/voicebox), and (b) by doing so it’s easier to detect things that you actually want to say, as opposed to what you’re casually thinking.
The larynx is innervated by branches of the vagus nerve on each side. Sensory innervation to the glottis and supraglottis is by the internal branch of the superior laryngeal nerve. The external branch of the superior laryngeal nerve innervates the cricothyroid muscle. Motor innervation to all other muscles of the larynx and sensory innervation to the subglottis is by the recurrent laryngeal nerve.
However, I'm sure we've all been in situations where we are on the verge of saying something, perhaps in an emotionally colored debate, but think twice and eventually say something less aggressive. In such a situation the device could accidentally be triggered. So the user must make sure to be perfectly balanced, one with himself and the universe, before using it for important conversations. At least for now.
Writing this, I get the idea that this problem could be overcome with AI: natural language processing could detect potentially insulting sentences or harsh language. The user could then be prompted to verify whether he meant to say a particular sentence (whether this would introduce too much lag is another question).
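To make the idea concrete, here is a minimal sketch of what such a safety filter might look like. Everything here is hypothetical: a real system would use a trained classifier rather than a wordlist, and the `confirm` callback stands in for however the device would actually prompt the user.

```python
# Hypothetical sketch: hold harsh-sounding sentences for confirmation
# before they reach the speech synthesizer.

HARSH_PHRASES = {"shut up", "damn", "idiot"}  # stand-in for a real classifier

def needs_confirmation(sentence: str) -> bool:
    """Return True if the sentence should be verified by the user first."""
    lowered = sentence.lower()
    return any(phrase in lowered for phrase in HARSH_PHRASES)

def speak(sentence: str, confirm):
    """Pass the sentence to the synthesizer only if it is benign
    or the user explicitly confirms it; otherwise suppress it."""
    if needs_confirmation(sentence) and not confirm(sentence):
        return None  # suppressed, never spoken aloud
    return sentence  # would be handed to the speech synthesizer
```

The lag question from above shows up here as the cost of the `confirm` round-trip: every flagged sentence stalls the conversation until the user responds.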
The device, currently able to recognize 150 words, is under development by Ambient Corporation, co-founded by Michael Callahan, who demonstrates the device in the following video at the TI Developer Conference '08 by placing a "voiceless phone call".
For the past few decades, humans have increasingly been extending their intellectual capacity with the use of machines. An example is using mobile devices to retrieve knowledge on the fly, making each device-wielding human more intellectually capable than one 20 years ago. But this is a matter of perspective, and many only see future invasive devices as "extensions of intelligence" (e.g. a neural-interfaced memory storage device) and everything else as tools.
Modern technology is starting to blur this line between intellectual extensions and tools. The "Smartest Person in the Room" project is one of these: using the Audeo, a person thinks of a question; the question is sent to a web knowledge application, the answer found and tunneled back out through the speakers. Question never audibly asked, yet answered. Quite brilliant.
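The pipeline described above can be sketched in a few lines. This is purely illustrative and not the project's actual implementation: the function names are invented, and a local dictionary stands in for the web knowledge application.

```python
# Hypothetical sketch of the "Smartest Person in the Room" pipeline:
# silently-thought question -> knowledge lookup -> answer for the speakers.

def answer_silent_question(recognized_text: str, knowledge_base: dict) -> str:
    """Map text recognized from neural signals to a spoken-answer string."""
    question = recognized_text.strip().lower()
    # Stand-in for querying a web knowledge application.
    answer = knowledge_base.get(question, "I don't know.")
    return answer  # in the real system, this feeds the speech synthesizer

# A toy knowledge base for illustration.
kb = {"what is the capital of france?": "Paris"}
```

The interesting constraint is the front end: with a 150-word vocabulary, the questions themselves would have to be composed from a fairly small set of recognizable words.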
Looking forward to monitoring the developments of this project, feeding my interest in machine interfaces right alongside Emotiv's Epoc and Neurosky's non-invasive neural interfaces.