VIEIRA, Vinícius Jefferson Dias. http://lattes.cnpq.br/6556015349267133
Abstract:
The goal of this work is to study the effects of non-stationary acoustic variations caused by emotional states and stress conditions. In the literature, there is still no purely acoustic attribute for emotion and stress recognition. By using the index of non-stationarity (INS), it is observed that different affective states present different degrees of non-stationarity. To detect such variations, the empirical mode decomposition (EMD) is employed, a nonlinear technique suitable for non-stationary signals. Thus, the main contribution of this work is the proposal of the HHHC (Hilbert-Huang-Hurst Coefficients) vector as a new nonlinear acoustic feature for the multistyle classification of emotional states and stress conditions.
The HHHC is a vocal source feature based on adaptive decomposition (EMD, which emphasizes affective acoustic variations) followed by the estimation of Hurst coefficients (which are related to the glottal source excitation) in each decomposition mode. Another contribution is the use of INS as additional information to the HHHC vector (HHHC+INS). In order to analyze the robustness of the proposed acoustic feature across different languages and speaking contexts, five databases are considered: four in the context of emotions and one in the context of stress conditions. As baseline acoustic features for comparison with HHHC, the vector of Hurst coefficients (pH), the Mel-Frequency Cepstral Coefficients (MFCC), and a TEO (Teager Energy Operator)-based feature are used. Another important contribution of this Thesis is the proposal of the α-integrated Gaussian Mixture Model (α-GMM) for the representation and classification of affective states. Its performance is compared to that of competing classifiers: GMM, Hidden Markov Models (HMM), and Support Vector Machines (SVM). Results demonstrate that the proposed HHHC acoustic feature leads to significant classification improvement over the baseline acoustic features. The results also show that the α-GMM outperforms the competing classification methods in all acoustic database scenarios.
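The HHHC construction described above (EMD followed by one Hurst coefficient per decomposition mode) can be sketched as follows. This is an illustrative sketch, not the thesis's implementation: it assumes the intrinsic mode functions (IMFs) are already available from any EMD library, and it uses the classical rescaled-range (R/S) Hurst estimator; the thesis may employ a different estimator (e.g., wavelet-based). The function names `hurst_rs` and `hhhc` are hypothetical.

```python
import numpy as np

def hurst_rs(x, min_win=8):
    """Estimate the Hurst exponent of a 1-D signal via rescaled-range (R/S)
    analysis: average R/S over windows of growing size, then fit the
    log-log slope. (One classical estimator; an assumption here.)"""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, rs = [], []
    win = min_win
    while win <= n // 2:
        ratios = []
        for start in range(0, n - win + 1, win):
            seg = x[start:start + win]
            z = np.cumsum(seg - seg.mean())    # cumulative deviation
            r = z.max() - z.min()              # range of the profile
            s = seg.std()                      # segment standard deviation
            if s > 0:
                ratios.append(r / s)
        if ratios:
            sizes.append(win)
            rs.append(np.mean(ratios))
        win *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope

def hhhc(imfs):
    """HHHC sketch: one Hurst coefficient per EMD mode (IMF)."""
    return np.array([hurst_rs(imf) for imf in imfs])
```

A signal with long-range dependence (e.g., a random walk) yields a higher Hurst estimate than white noise, which is the kind of per-mode contrast the HHHC vector exploits.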
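The α-GMM mentioned above can be read as replacing the standard linear mixture of component densities by an α-integration (in Amari's sense) of those densities. The sketch below is an assumption-laden illustration of that idea for diagonal-covariance Gaussians, with the convention that α = -1 recovers the ordinary GMM; the exact parameterization and normalization used in the thesis may differ, and the function names are hypothetical.

```python
import numpy as np

def diag_gauss_pdf(x, mean, var):
    """Density of a diagonal-covariance Gaussian at point x."""
    d = len(mean)
    q = np.sum((x - mean) ** 2 / var)
    return np.exp(-0.5 * q) / np.sqrt((2 * np.pi) ** d * np.prod(var))

def alpha_mixture_density(x, weights, means, variances, alpha=-1.0):
    """α-integrated mixture of Gaussians (sketch).

    Components are combined as [sum_k w_k * p_k^e]^(1/e) with
    e = (1 - alpha) / 2, so alpha = -1 gives e = 1, i.e. the
    standard GMM mixture sum_k w_k * p_k.
    """
    p = np.array([diag_gauss_pdf(x, m, v)
                  for m, v in zip(means, variances)])
    if np.isclose(alpha, 1.0):
        # alpha -> 1 limit: geometric (log-domain) mixture
        return np.exp(np.sum(weights * np.log(p)))
    e = (1.0 - alpha) / 2.0
    return np.sum(weights * p ** e) ** (1.0 / e)
```

Varying α away from -1 changes how sharply the dominant component influences the mixture score, which is the degree of freedom the α-GMM adds over the standard GMM for classifying affective states.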