Recognizing Emotion in Speech using Neural Networks

K. Dai, H.J. Fell, and J. MacAuslan (USA)


Keywords: voice recognition software, emotion recognition, speech landmarks, neural networks


Emotion recognition is an important aspect of affective computing and has potential uses in assistive technologies. In this paper we used landmark and other acoustic features to recognize different emotional states in speech. We analyzed 2442 utterances from the Emotional Prosody Speech and Transcripts corpus and extracted 62 features from each utterance. A neural network classifier was built to recognize the emotional states of these utterances. We obtained over 90% accuracy in distinguishing hot anger from neutral states, and over 80% accuracy both in distinguishing happiness from sadness and in distinguishing hot anger from cold anger. We also achieved 62% and 49% accuracy when classifying 4 and 6 emotions, respectively. We had 20% accuracy in classifying all 15 emotions in the corpus, which is a large improvement over other studies. We plan to apply this work to developing a tool to help people who have difficulty identifying emotion.
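The classification pipeline described above (a fixed-length vector of 62 acoustic/landmark features per utterance fed into a neural network classifier) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the feature extraction, corpus, hidden-layer width, learning rate, and the synthetic training data are all assumptions, and a single hidden layer is used only as an example architecture.

```python
import numpy as np

# Illustrative sketch: a one-hidden-layer neural network classifier over
# 62 features per utterance, trained with softmax cross-entropy.
# The data below is synthetic; in the paper, X would hold landmark and
# other acoustic features extracted from each utterance.

rng = np.random.default_rng(0)

N_FEATURES = 62   # features per utterance (from the abstract)
N_CLASSES = 6     # e.g. the 6-emotion task (other tasks use 2, 4, or 15)
HIDDEN = 32       # hidden-layer width (an assumption, not from the paper)

X = rng.normal(size=(300, N_FEATURES))          # stand-in feature vectors
y = rng.integers(0, N_CLASSES, size=300)        # stand-in emotion labels
Y = np.eye(N_CLASSES)[y]                        # one-hot targets

W1 = rng.normal(scale=0.1, size=(N_FEATURES, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(scale=0.1, size=(HIDDEN, N_CLASSES))
b2 = np.zeros(N_CLASSES)

def forward(X):
    """Hidden tanh layer followed by a softmax output layer."""
    h = np.tanh(X @ W1 + b1)
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, e / e.sum(axis=1, keepdims=True)

def cross_entropy(p, Y):
    return -np.mean(np.sum(Y * np.log(p + 1e-12), axis=1))

lr = 0.1
losses = []
for _ in range(300):
    h, p = forward(X)
    losses.append(cross_entropy(p, Y))
    # Backpropagation through softmax cross-entropy
    d_logits = (p - Y) / len(X)
    dW2 = h.T @ d_logits
    db2 = d_logits.sum(axis=0)
    dh = (d_logits @ W2.T) * (1 - h**2)   # tanh derivative
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

_, p = forward(X)
pred = p.argmax(axis=1)   # predicted emotion class per utterance
```

The same structure applies to each task in the abstract; only `N_CLASSES` (2, 4, 6, or 15) and the training labels change.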
