Recognition of in-ear microphone speech data using multi-layer neural networks

Authors
Bulbuller, Gokhan.
Advisors
Fargues, Monique P.
Vaidyanathan, Ravi
Date of Issue
2006-03
Publisher
Monterey, CA: Naval Postgraduate School
Abstract
Speech collected through a microphone placed in front of the mouth has been the primary source of data for speech recognition; only a few studies have used speech collected from the human ear canal. In this study, a speech recognition system is presented: an isolated-word recognizer that uses speech collected from the subjects' external auditory canals via an in-ear microphone. The vocabulary is currently limited to seven words that can serve as control commands for a wide variety of applications. Speech segmentation is achieved using the short-time signal energy parameter and the short-time energy-entropy feature (EEF), together with heuristic assumptions. Multi-layer feedforward neural networks with two-layer and three-layer configurations are selected for the word recognition task, using real cepstrum coefficients (RCs) and mel-frequency cepstral coefficients (MFCCs) extracted from each segmented utterance as characteristic features. Results show that the neural network configurations investigated are viable choices for this recognition task: the average recognition rates obtained with MFCCs as input features for the two-layer and three-layer networks are 94.731% and 94.61%, respectively, on the data investigated. Average recognition rates obtained using RCs as features on the same network configurations are 86.252% and 86.7%, respectively.
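For readers who want a concrete starting point, the sketch below illustrates the kind of frame-level computation the abstract describes: short-time energy, spectral entropy, and an energy-based endpoint rule. All function names, window parameters, the threshold, and the EEF combination shown are hypothetical stand-ins chosen for illustration; the thesis's actual definitions are not reproduced in the abstract.

```python
import numpy as np

def frame_signal(x, frame_len=256, hop=128):
    """Split a 1-D signal into overlapping frames (assumes len(x) >= frame_len)."""
    n_frames = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop:i * hop + frame_len] for i in range(n_frames)])

def short_time_energy(frames):
    """Short-time energy: sum of squared samples in each frame."""
    return np.sum(frames.astype(float) ** 2, axis=1)

def spectral_entropy(frames, eps=1e-12):
    """Entropy of each frame's normalized power spectrum."""
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    p = spec / (np.sum(spec, axis=1, keepdims=True) + eps)
    return -np.sum(p * np.log2(p + eps), axis=1)

def energy_entropy_feature(frames):
    """Hypothetical energy-entropy combination, a stand-in for the
    thesis's EEF (exact formula not given in the abstract)."""
    E = short_time_energy(frames)
    H = spectral_entropy(frames)
    return np.sqrt(1.0 + np.abs(E * H))

def segment_by_energy(x, frame_len=256, hop=128, thresh_ratio=0.1):
    """Crude endpoint detection: keep the span of frames whose energy
    exceeds a fraction of the peak frame energy (a heuristic stand-in
    for the thesis's segmentation rules)."""
    frames = frame_signal(x, frame_len, hop)
    E = short_time_energy(frames)
    idx = np.flatnonzero(E > thresh_ratio * E.max())
    if idx.size == 0:
        return None
    return x[idx[0] * hop: idx[-1] * hop + frame_len]

# Classifier stage (also hedged): the two- and three-layer feedforward
# networks could be approximated with, e.g., scikit-learn's MLPClassifier:
#   from sklearn.neural_network import MLPClassifier
#   clf = MLPClassifier(hidden_layer_sizes=(32,))  # one hidden layer
#   clf.fit(mfcc_features, word_labels)            # per-utterance MFCC features
```

A real recognizer would replace the fixed energy threshold and the illustrative EEF formula above with the thesis's segmentation heuristics, and would train on MFCC or RC vectors extracted per utterance as the abstract describes.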
Type
Thesis
Organization
Naval Postgraduate School (U.S.)
Format
xxii, 163 p. : col. ill.
Distribution Statement
Approved for public release; distribution is unlimited.