Speech emotion recognition using amplitude modulation parameters and a combined feature selection procedure

Arianna Mencattini, Eugenio Martinelli, Giovanni Costantini, Massimiliano Todisco, Barbara Basile, Marco Bozzali, Corrado Di Natale: Speech emotion recognition using amplitude modulation parameters and a combined feature selection procedure. In: Knowledge-Based Systems, vol. 63, pp. 68-81, 2014.

Abstract

Speech emotion recognition (SER) is a challenging task in demanding human-machine interaction systems. Standard approaches based on the categorical model of emotions achieve low performance, probably because they model emotions as distinct and independent affective states. Starting from the recently investigated assumption of the dimensional circumplex model of emotions, SER systems are structured as the prediction of valence and arousal on a continuous scale in a two-dimensional domain. In this study, we propose the use of a PLS regression model, optimized according to specific feature selection procedures and trained on the Italian speech corpus EMOVO, suggesting a way to automatically label the corpus in terms of arousal and valence. New speech features related to the amplitude modulation of speech, caused by the slowly-varying articulatory motion, and standard features extracted from the pitch contour have been included in the regression model. An average value of the coefficient of determination R² is reported for the female model (maximum for fear, minimum for sadness) and for the male model (maximum for anger, minimum for joy), over the seven primary emotions (including the neutral state).
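
To make the described pipeline concrete, below is a minimal illustrative sketch, not the authors' implementation: it pairs a crude amplitude-modulation descriptor (envelope statistics from the analytic signal) with a PLS regression mapping speech features to continuous valence/arousal targets. The envelope summary statistics, the synthetic corpus, the labels, and the choice of two latent components are all assumptions made for demonstration; the paper's actual feature pool and selection procedure are richer.

    # Illustrative sketch only: AM-style envelope features + PLS regression
    # for continuous valence/arousal prediction. All data here is synthetic.
    import numpy as np
    from scipy.signal import hilbert
    from sklearn.cross_decomposition import PLSRegression

    def am_envelope_features(signal, fs):
        """Summarize the slowly-varying amplitude envelope of a speech frame.

        The envelope is the magnitude of the analytic signal; its mean,
        standard deviation, and dominant low-frequency modulation rate act
        as hypothetical stand-ins for the paper's AM parameters.
        """
        envelope = np.abs(hilbert(signal))
        spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
        freqs = np.fft.rfftfreq(len(envelope), d=1.0 / fs)
        # Restrict to modulation rates typical of articulatory motion (< 20 Hz).
        low = freqs < 20.0
        dom_mod = freqs[low][np.argmax(spectrum[low])]
        return np.array([envelope.mean(), envelope.std(), dom_mod])

    rng = np.random.default_rng(0)
    fs = 16000
    n_utterances = 100

    # Synthetic corpus: each "utterance" is noise with a random slow AM envelope.
    X, Y = [], []
    for _ in range(n_utterances):
        mod_rate = rng.uniform(2.0, 12.0)  # articulatory-range AM rate (assumed)
        t = np.arange(fs) / fs
        carrier = rng.standard_normal(fs)
        signal = (1.0 + 0.5 * np.sin(2 * np.pi * mod_rate * t)) * carrier
        X.append(am_envelope_features(signal, fs))
        # Hypothetical continuous (valence, arousal) labels in the circumplex plane.
        Y.append([rng.uniform(-1, 1),
                  np.tanh(mod_rate / 10) + 0.1 * rng.standard_normal()])
    X, Y = np.asarray(X), np.asarray(Y)

    # PLS projects features and targets onto shared latent components,
    # then regresses in that reduced space.
    pls = PLSRegression(n_components=2)
    pls.fit(X[:80], Y[:80])
    pred = pls.predict(X[80:])
    ss_res = ((Y[80:] - pred) ** 2).sum(axis=0)
    ss_tot = ((Y[80:] - Y[80:].mean(axis=0)) ** 2).sum(axis=0)
    print("per-target R^2 (valence, arousal):", 1 - ss_res / ss_tot)

The per-target R² computed at the end mirrors the evaluation metric quoted in the abstract; the paper additionally trains separate models for female and male speakers.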

    BibTeX

    @article{Mencattini2014,
    title = {Speech emotion recognition using amplitude modulation parameters and a combined feature selection procedure},
    author = {Arianna Mencattini and Eugenio Martinelli and Giovanni Costantini and Massimiliano Todisco and Barbara Basile and Marco Bozzali and Corrado Di Natale},
    publisher = {Elsevier},
    year = {2014},
    date = {2014-04-02},
    journal = {Knowledge-Based Systems},
    volume = {63},
    pages = {68--81},
    abstract = {Speech emotion recognition (SER) is a challenging task in demanding human-machine interaction systems. Standard approaches based on the categorical model of emotions achieve low performance, probably because they model emotions as distinct and independent affective states. Starting from the recently investigated assumption of the dimensional circumplex model of emotions, SER systems are structured as the prediction of valence and arousal on a continuous scale in a two-dimensional domain. In this study, we propose the use of a PLS regression model, optimized according to specific feature selection procedures and trained on the Italian speech corpus EMOVO, suggesting a way to automatically label the corpus in terms of arousal and valence. New speech features related to the amplitude modulation of speech, caused by the slowly-varying articulatory motion, and standard features extracted from the pitch contour have been included in the regression model. An average value of the coefficient of determination R² is reported for the female model (maximum for fear, minimum for sadness) and for the male model (maximum for anger, minimum for joy), over the seven primary emotions (including the neutral state).},
    keywords = {Audio signal modulation, Circumplex model of emotions, Partial least square (PLS) regression, Pearson correlation coefficient, Pitch contour characterization, Speech emotion recognition (SER)},
    pubstate = {published},
    tppubtype = {article}
    }
    
