ERBAL
Silent Speech Recognition
by Dr. Walck, Riley Fleming, Michael Fornito, and Levi Lingsch
Project Summary
Although next-generation spacesuits are beginning to incorporate augmented reality systems, no effective means of controlling their display elements has emerged. Current input schemes, such as acoustic speech recognition and hand gestures, remain a major bottleneck for the application and capabilities of these displays. We believe silent speech interfaces are a potential solution, capable of fluid, high-bandwidth information transfer through non-invasive, skin-surface electromyography (EMG) electrodes.
We are therefore developing a silent speech interface that captures subvocalizations via skin-surface EMG electrodes and decodes them into commands that interact with an augmented reality display. The collected EMG data will train a neural network that classifies the subject’s subvocal input against a command library.
Ultimately, the user will silently send commands to an augmented reality device through subvocalization. The applications of the resulting technology are far-reaching, with potential uses in space, defense, aviation, and consumer technology.
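As a rough illustration of the classification step described above, the sketch below trains a minimal softmax classifier on time-domain features extracted from windowed EMG-like signals. The command names, window length, features, and synthetic data are all hypothetical placeholders, not the project's actual configuration, and the project's real network would be deeper than a single layer.

```python
import numpy as np

# Hypothetical command library and window length (placeholders).
COMMANDS = ["select", "next", "back"]
WINDOW = 200  # samples per EMG window
rng = np.random.default_rng(0)

def featurize(window):
    """Common time-domain EMG features: mean absolute value,
    waveform length, and zero-crossing count."""
    mav = np.mean(np.abs(window))
    wl = np.sum(np.abs(np.diff(window)))
    zc = np.sum(np.diff(np.sign(window)) != 0)
    return np.array([mav, wl, zc], dtype=float)

# Synthetic stand-in data: each "command" gets a distinct signal amplitude,
# so the classes are separable in feature space.
X, y = [], []
for label, amp in enumerate([0.5, 1.0, 2.0]):
    for _ in range(50):
        X.append(featurize(amp * rng.standard_normal(WINDOW)))
        y.append(label)
X, y = np.array(X), np.array(y)
X = (X - X.mean(axis=0)) / X.std(axis=0)  # normalize features

# One softmax layer trained by gradient descent on cross-entropy loss.
W = np.zeros((X.shape[1], len(COMMANDS)))
b = np.zeros(len(COMMANDS))
for _ in range(500):
    logits = X @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(y)), y] -= 1  # gradient of cross-entropy w.r.t. logits
    W -= 0.1 * X.T @ p / len(y)
    b -= 0.1 * p.mean(axis=0)

pred = np.argmax(X @ W + b, axis=1)
accuracy = np.mean(pred == y)
```

New subvocal commands would be added by collecting labeled windows for them and retraining (or fine-tuning) the classifier with an enlarged output layer, which is the extensibility the goals below call for.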
Goals
- Develop a silent speech interface that can deliver live commands via subvocalization to an augmented reality display.
- Demonstrate the effectiveness of silent speech input at high accuracy levels among a test population.
- Train a neural network on a silent speech command library that can be extended with additional commands in the future.
© 2019