Diphone-based speech recognition using neural networks
Cantrell, Mark E.
McGhee, Robert B.
Boger, Dan C.
Speaker-independent automatic speech recognition (ASR) is a problem of long-standing interest to the Department of Defense. Unfortunately, existing systems are still too limited in capability for many military purposes. Most large-vocabulary systems use phonemes (individual speech sounds, including vowels and consonants) as recognition units. This research explores the use of diphones (pairings of phonemes) as recognition units. Diphones are acoustically easier to recognize because coarticulation effects between the diphone's phonemes become recognition features, rather than confounding variables as in phoneme recognition. Also, diphones carry more information than phonemes, giving the lexical analyzer two chances to detect every phoneme in the word. Research results confirm these theoretical advantages. In testing with 4490 speech samples from 163 speakers, 70.2% of 157 test diphones were correctly identified by one trained neural network. In the same tests, the correct diphone was one of the top three outputs 89.0% of the time. During word recognition tests, the correct word was detected 85% of the time in continuous speech. Of those detections, the correct diphone was ranked first 41.6% of the time and among the top six 74% of the time. In addition, new methods of pitch-based frequency normalization and network feedback-based time alignment are introduced. Both of these techniques improved recognition accuracy on male and female speech samples from all eight dialect regions in the U.S. In one test set, frequency normalization reduced errors by 34%. Similarly, feedback-based time alignment reduced another network's test set errors from 32.8% to 11.0%.
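The diphone inventory described above can be illustrated with a minimal sketch. This is not the thesis's implementation; it only shows how a word's phoneme sequence decomposes into overlapping diphones, and why every interior phoneme of a word appears in two diphones (the "two chances" the abstract mentions). The phoneme symbols used here are hypothetical ARPAbet-style labels chosen for illustration.

```python
def to_diphones(phonemes):
    """Pair each phoneme with its successor to form the word's diphones.

    Because the pairs overlap, every interior phoneme occurs in two
    diphones, so a lexical analyzer matching diphones gets two chances
    to detect each phoneme of the word.
    """
    return [(a, b) for a, b in zip(phonemes, phonemes[1:])]

# Hypothetical phoneme sequence for the word "speech": /s p iy ch/
print(to_diphones(["s", "p", "iy", "ch"]))
# -> [('s', 'p'), ('p', 'iy'), ('iy', 'ch')]
```

Note that the interior phonemes /p/ and /iy/ each appear in two of the three diphones, whereas a phoneme-based recognizer would get only one chance at each.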
Showing items related by title, author, creator and subject.
Liu, I. Kang (1987-03);This thesis examined whether American English speech recognition technology can be used by Chinese speakers, in their native tongue, to achieve a reasonable degree of recognition accuracy. Three experiments were completed. ...
Hollabaugh, Jon Dale. (Monterey, California: U.S. Naval Postgraduate School, 1963);Speech is one of the most inefficient methods of communication. Therefore, there has been a continuing effort to devise means to reduce the redundancy, that is, compress the bandwidth required for speech communication ...
Bulbuller, Gokhan. (Monterey, California: Naval Postgraduate School, 2006-03);Speech collected through a microphone placed in front of the mouth has been the primary source of data collection for speech recognition. There are only a few speech recognition studies using speech collected from the ...