Speech emotion recognition using amplitude modulation parameters and a combined feature selection procedure

Arianna Mencattini, Eugenio Martinelli, Giovanni Costantini, Massimiliano Todisco, Barbara Basile, Marco Bozzali, Corrado Di Natale (2014): Speech emotion recognition using amplitude modulation parameters and a combined feature selection procedure. In: Knowledge-Based Systems, 63, pp. 68-81, 2014.

Abstract

Speech emotion recognition (SER) is a challenging task in demanding human-machine interaction systems. Standard approaches based on the categorical model of emotions achieve low performance, probably because they model emotions as distinct and independent affective states. Starting from the recently investigated assumption of the dimensional circumplex model of emotions, SER systems are structured as the prediction of valence and arousal on a continuous scale in a two-dimensional domain. In this study, we propose the use of a PLS regression model, optimized according to specific feature selection procedures and trained on the Italian speech corpus EMOVO, and suggest a way to automatically label the corpus in terms of arousal and valence. New speech features related to the amplitude modulation of speech, caused by the slowly varying articulatory motion, together with standard features extracted from the pitch contour, have been included in the regression model. Over the seven primary emotions (including the neutral state), an average coefficient of determination of (maximum value of for fear and minimum of for sadness) is obtained for the female model, and an average value of (maximum value of for anger and minimum value of for joy) is obtained for the male model.
