Lavoisier S.A.S.
14 rue de Provigny
94236 Cachan cedex
FRANCE

Opening hours: 08:30-12:30 / 13:30-17:30
Tel.: +33 (0)1 47 40 67 00
Fax: +33 (0)1 47 40 67 02


Canonical URL: www.lavoisier.fr/livre/autre/robust-emotion-recognition-using-spectral-and-prosodic-features/rao/descriptif_2906334
Short URL or permalink: www.lavoisier.fr/livre/notice.asp?ouvrage=2906334

Robust Emotion Recognition using Spectral and Prosodic Features, 2013. SpringerBriefs in Speech Technology Series

Language: English

Authors:

In this brief, the authors discuss recently explored spectral (sub-segmental and pitch-synchronous) and prosodic (global and local features at word and syllable levels in different parts of the utterance) features for discerning emotions in a robust manner. They also examine the complementary evidence obtained from excitation source, vocal tract system and prosodic features for enhancing emotion recognition performance. Features based on speaking-rate characteristics are explored with the help of multi-stage and hybrid models to further improve emotion recognition performance. The proposed spectral and prosodic features are evaluated on a real-life emotional speech corpus.
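The "complementary evidence" idea mentioned above is commonly realized as score-level fusion: each feature-specific classifier (excitation source, vocal tract system, prosodic) produces per-emotion confidence scores, and a weighted sum combines them. The sketch below is illustrative only; the emotion labels, scores and weights are hypothetical and not taken from the book.

```python
# Hypothetical sketch: score-level fusion of complementary classifiers.
# All labels, scores and weights below are made-up illustrations.

def fuse_scores(score_sets, weights):
    """Weighted sum of per-emotion confidence scores from several
    feature-specific classifiers (e.g. source, system, prosodic)."""
    fused = {}
    for scores, w in zip(score_sets, weights):
        for emotion, s in scores.items():
            fused[emotion] = fused.get(emotion, 0.0) + w * s
    return fused

# Illustrative per-classifier confidence scores:
source_scores  = {"anger": 0.6, "happy": 0.3, "neutral": 0.1}
system_scores  = {"anger": 0.5, "happy": 0.4, "neutral": 0.1}
prosody_scores = {"anger": 0.7, "happy": 0.2, "neutral": 0.1}

fused = fuse_scores([source_scores, system_scores, prosody_scores],
                    weights=[0.2, 0.5, 0.3])
print(max(fused, key=fused.get))  # anger
```

The weights would typically be tuned on held-out data so that the more reliable feature stream dominates.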
Contents:

1 Introduction (1)
  1.1 Introduction (1)
  1.2 Emotion from psychological perspective (2)
  1.3 Emotion from speech signal perspective (3)
    1.3.1 Speech production mechanism (4)
    1.3.2 Source features (4)
    1.3.3 System features (5)
    1.3.4 Prosodic features (6)
  1.4 Emotional speech databases (6)
  1.5 Applications of speech emotion recognition (8)
  1.6 Issues in speech emotion recognition (8)
  1.7 Objectives and scope of the work (9)
  1.8 Main highlights of research investigations (10)
  1.9 Brief overview of contributions in this book (10)
    1.9.1 Emotion recognition using spectral features extracted from sub-syllabic regions and pitch synchronous analysis (10)
    1.9.2 Emotion recognition using global and local prosodic features extracted from words and syllables (11)
    1.9.3 Emotion recognition using combination of features (11)
    1.9.4 Emotion recognition on real life emotional speech database (11)
  1.10 Organization of the book (11)
2 Robust Emotion Recognition using Pitch Synchronous and Sub-syllabic Spectral Features (15)
  2.1 Introduction (15)
  2.2 Emotional speech corpora (17)
    2.2.1 Indian Institute of Technology Kharagpur-Simulated Emotional Speech Corpus: IITKGP-SESC (18)
    2.2.2 Berlin Emotional Speech Database: Emo-DB (20)
  2.3 Feature extraction (21)
    2.3.1 Linear prediction cepstral coefficients (LPCCs) (21)
    2.3.2 Mel frequency cepstral coefficients (MFCCs) (22)
    2.3.3 Formant features (23)
    2.3.4 Extraction of sub-syllabic spectral features (25)
    2.3.5 Pitch synchronous analysis (28)
  2.4 Classifiers (29)
    2.4.1 Gaussian mixture models (GMM) (30)
    2.4.2 Auto-associative neural networks (31)
  2.5 Results and discussion (33)
  2.6 Summary (42)
3 Robust Emotion Recognition using Word and Syllable Level Prosodic Features (45)
  3.1 Introduction (45)
  3.2 Prosodic features: Importance in emotion recognition (46)
  3.3 Motivation (49)
  3.4 Extraction of global and local prosodic features (51)
    3.4.1 Sentence level features (51)
    3.4.2 Word and syllable level features (52)
  3.5 Results and discussion (55)
    3.5.1 Emotion recognition systems using sentence level prosodic features (55)
    3.5.2 Emotion recognition systems using word level prosodic features (60)
    3.5.3 Emotion recognition systems using syllable level prosodic features (64)
  3.6 Summary (68)
4 Robust Emotion Recognition using Combination of Excitation Source, Spectral and Prosodic Features (71)
  4.1 Introduction (71)
  4.2 Feature combination: A study (72)
  4.3 Emotion recognition using combination of excitation source and vocal tract system features (74)
  4.4 Emotion recognition using combination of vocal tract system and prosodic features (74)
  4.5 Emotion recognition using combination of excitation source and prosodic features (78)
  4.6 Emotion recognition using combination of excitation source, system and prosodic features (80)
  4.7 Summary (83)
5 Robust Emotion Recognition using Speaking Rate Features (87)
  5.1 Introduction (87)
  5.2 Motivation (89)
  5.3 Two stage emotion recognition system (92)
  5.4 Gross level emotion recognition (93)
  5.5 Finer level emotion recognition (94)
  5.6 Summary (96)
6 Emotion Recognition on Real Life Emotions (97)
  6.1 Introduction (97)
  6.2 Real life emotion speech corpus (98)
  6.3 Recognition performance on real life emotions (99)
  6.4 Summary (101)
7 Summary and Conclusions (103)
  7.1 Summary of the present work (103)
  7.2 Contributions of the present work (105)
  7.3 Conclusions from the present work (106)
  7.4 Scope for future work (107)
A MFCC Features (111)
B Gaussian Mixture Model (GMM) (115)
  B.1 Training the GMMs (116)
    B.1.1 Expectation Maximization (EM) Algorithm (116)
    B.1.2 Maximum a posteriori (MAP) Adaptation (117)
  B.2 Testing (119)
References (120)

K. Sreenivasa Rao is at Indian Institute of Technology, Kharagpur, India.
Shashidhar G. Koolagudi is at Graphic Era University, Dehradun, India.

Deals with emotions in terms of how to characterize them, how to acquire emotion-specific information from speech conversations, and finally how to incorporate the acquired emotion-specific information to synthesize the desired emotions

Proposes pitch synchronous and sub-syllabic spectral features for characterizing emotions

Explores global and local prosodic features at syllable, word and phrase levels to capture the emotion-discriminative information

Demonstrates recognition of real-life emotions using hierarchical models based on speaking rate
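Such a hierarchical scheme typically runs in two stages: a gross stage that uses speaking rate to narrow the candidates to a broad emotion group, and a finer stage that picks one emotion within that group. The sketch below illustrates the idea; the rate thresholds, emotion groupings and scores are hypothetical, not values from the book.

```python
# Hypothetical two-stage (gross -> finer) emotion recognition sketch.
# Thresholds and emotion groupings are illustrative assumptions only.

GROUPS = {
    "fast":   ["anger", "happy"],
    "slow":   ["sad", "boredom"],
    "normal": ["neutral"],
}

def gross_stage(syllables_per_second):
    """Stage 1: map speaking rate to a broad emotion group."""
    if syllables_per_second > 5.0:
        return "fast"
    if syllables_per_second < 3.0:
        return "slow"
    return "normal"

def finer_stage(group, scores):
    """Stage 2: pick the best-scoring emotion within the group."""
    candidates = {e: scores[e] for e in GROUPS[group] if e in scores}
    return max(candidates, key=candidates.get)

scores = {"anger": 0.4, "happy": 0.3, "sad": 0.2,
          "boredom": 0.05, "neutral": 0.05}
group = gross_stage(6.2)           # -> "fast"
print(finer_stage(group, scores))  # anger
```

The gross stage prunes confusable emotions early, so the finer classifier only has to separate emotions with similar speaking rates.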

Includes supplementary material: sn.pub/extras

Publication date:

118 pages.

15.5 × 23.5 cm

Available from the publisher (supply lead time: 15 days).

Indicative price: €52.74
