Acoustical and Environmental Robustness in Automatic Speech Recognition, Softcover reprint of the original 1st ed. 1993
The Springer International Series in Engineering and Computer Science Series, Vol. 201

Language: English


158.24 €

In Print (Delivery period: 15 days).

186 p. · 15.5x23.5 cm · Paperback
The need for automatic speech recognition systems to be robust with respect to changes in their acoustical environment has become more widely appreciated in recent years, as more systems find their way into practical applications. Although environmental robustness has received only a small fraction of the attention devoted to speaker independence, even speech recognition systems designed to be speaker independent frequently perform very poorly when tested with a different type of microphone or acoustical environment from the one with which they were trained. The use of microphones other than a "close-talking" headset also tends to degrade recognition performance severely. Even in relatively quiet office environments, speech is degraded by additive noise from fans, slamming doors, and other conversations, as well as by the effects of unknown linear filtering arising from reverberation off room surfaces, or from spectral shaping by microphones or the vocal tracts of individual speakers. Speech recognition systems designed for long-distance telephone lines, or for applications deployed in more adverse acoustical environments such as motor vehicles, factory floors, or outdoors, demand far greater degrees of environmental robustness.

There are several different ways of building acoustical robustness into speech recognition systems. Arrays of microphones can be used to develop a directionally sensitive system that resists interference from competing talkers and other noise sources that are spatially separated from the source of the desired speech signal.
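The array-based approach mentioned above can be illustrated with the classic delay-and-sum beamformer: each microphone's signal is time-shifted to align a plane wave arriving from the desired direction, then the channels are averaged, so coherent speech adds in phase while off-axis interference partially cancels. This is a minimal sketch for illustration only, not code from the book; the function and parameter names are invented here.

```python
import numpy as np

def delay_and_sum(signals, mic_positions, direction, fs, c=343.0):
    """Steer a linear microphone array toward `direction` by delaying
    each channel and averaging.

    signals:       (n_mics, n_samples) array of simultaneous recordings
    mic_positions: (n_mics,) microphone positions along the array axis, metres
    direction:     steering angle in radians (0 = broadside)
    fs:            sample rate in Hz
    c:             speed of sound in m/s
    """
    n_mics, n_samples = signals.shape
    out = np.zeros(n_samples)
    for m in range(n_mics):
        # Plane-wave delay at this microphone relative to the array origin.
        delay = mic_positions[m] * np.sin(direction) / c
        # Integer-sample alignment; np.roll wraps at the edges, which is
        # acceptable for a short illustrative sketch.
        shift = int(round(delay * fs))
        out += np.roll(signals[m], -shift)
    return out / n_mics
```

For a source at broadside (direction 0) all delays are zero and the output equals the common signal; for off-axis noise the misaligned copies average toward zero, which is the spatial selectivity the blurb refers to.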
Contents:
List of Figures
List of Tables
Foreword
Acknowledgments
1. Introduction
1.1. Acoustical Environmental Variability and its Consequences
1.2. Previous Research in Signal Processing for Robust Speech Recognition
1.3. Towards Environment-Independent Recognition
1.4. Monograph Outline
2. Experimental Procedure
2.1. An Overview of SPHINX
2.2. The Census Database
2.3. Objective Measurements
2.4. Baseline Recognition Accuracy
2.5. Other Databases
2.6. Summary
3. Frequency Domain Processing
3.1. Multi-Style Training
3.2. Channel Equalization
3.3. Noise Suppression by Spectral Subtraction
3.4. Experiments with Sphinx
3.5. Summary
4. The SDCN Algorithm
4.1. A Model of the Environment
4.2. Processing in the Frequency Domain: The MMSEN Algorithm
4.3. Processing in the Cepstral Domain: The SDCN Algorithm
4.4. Summary
5. The CDCN Algorithm
5.1. Introduction to the CDCN Algorithm
5.2. MMSE Estimator of the Cepstral Vector
5.3. ML Estimation of Noise and Spectral Tilt
5.4. Implementation Details
5.5. Summary of the CDCN Algorithm
5.6. Evaluation Results
5.7. Summary
6. Other Algorithms
6.1. The ISDCN Algorithm
6.2. The BSDCN Algorithm
6.3. The FCDCN Algorithm
6.4. Environmental Adaptation in Real Time
6.5. Summary
7. Frequency Normalization
7.1. The Use of Mel-scale Parameters
7.2. Improved Frequency Resolution
7.3. Variable Frequency Warping
7.4. Summary
8. Summary of Results
9. Conclusions
9.1. Contributions
9.2. Suggestions for Future Work
Appendix I. Glossary
Appendix II. Signal Processing in Sphinx
Appendix III. The Bilinear Transform
Appendix IV. Spectral Estimation Issues
Appendix V. MMSE Estimation in the CDCN Algorithm
Appendix VI. Maximum Likelihood via the EM Algorithm
Appendix VII. ML Estimation of Noise and Spectral Tilt
Appendix VIII. Vocabulary and Pronunciation Dictionary
References