Comparison of glottal closure instants detection algorithms for emotional speech

Conference article in proceedings
Date
2020-05
Language
en
Pages
5
7379-7383
Series
Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing
Abstract
In the production of voiced speech, epochs or glottal closure instants (GCIs) refer to the instants of significant excitation of the vocal tract. Extraction of GCIs is used as a pre-processing stage in many areas of speech technology, such as prosody modification, speech synthesis and voice source analysis. Over the past decades, several GCI detection algorithms have been developed, and most of them provide excellent results for speech signals produced with the modal (normal) type of phonation. There are, however, no studies comparing multiple state-of-the-art GCI detection methods on emotional speech. In this paper, we compare six GCI detection algorithms using emotional speech and known evaluation metrics. We use the Berlin EMO-DB acted emotional speech database, which contains seven emotions and simultaneous electroglottography (EGG) recordings as ground truth. The results show that all six GCI detection algorithms perform best on speech of neutral emotion and that performance degrades particularly for emotions of high arousal (anger and joy). To improve GCI detection in emotional speech, the study underlines the importance of local average pitch period estimates.
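The abstract refers to "known evaluation metrics" for GCI detection against EGG-derived reference instants. A commonly used set of such metrics (identification rate, miss rate and false-alarm rate, computed per larynx cycle) can be sketched as below. This is an illustrative implementation, not the paper's code; the function name and the choice of cycle boundaries (midpoints between consecutive reference GCIs) are assumptions.

```python
import numpy as np

def gci_metrics(ref_gcis, det_gcis):
    """Score detected GCIs against reference (e.g. EGG-derived) GCIs.

    Each larynx cycle around a reference GCI (bounded by midpoints to its
    neighbours) should contain exactly one detected GCI: zero detections
    count as a miss, more than one as a false alarm.
    """
    ref = np.sort(np.asarray(ref_gcis, dtype=float))
    det = np.sort(np.asarray(det_gcis, dtype=float))
    # Cycle boundaries: midpoints between consecutive reference GCIs.
    bounds = np.concatenate(([-np.inf], (ref[:-1] + ref[1:]) / 2, [np.inf]))
    hits = misses = false_alarms = 0
    timing_errors = []
    for i, r in enumerate(ref):
        in_cycle = det[(det >= bounds[i]) & (det < bounds[i + 1])]
        if len(in_cycle) == 0:
            misses += 1
        elif len(in_cycle) == 1:
            hits += 1
            timing_errors.append(abs(in_cycle[0] - r))
        else:
            false_alarms += 1
    n = len(ref)
    return {
        "identification_rate": hits / n,
        "miss_rate": misses / n,
        "false_alarm_rate": false_alarms / n,
        "mean_abs_timing_error": float(np.mean(timing_errors))
        if timing_errors else float("nan"),
    }
```

For example, with reference GCIs at 10, 20, 30 and 40 ms, a detector that places one GCI in each of the first two cycles, two in the third, and none in the fourth scores an identification rate of 0.5 with one miss and one false alarm.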
Description
Open the publication when the article is available
Keywords
Emotions, Epochs, Excitation source, Glottal Closure Instants, Speech analysis
Citation
Kadiri, S., Alku, P. & Yegnanarayana, B. 2020, 'Comparison of glottal closure instants detection algorithms for emotional speech', in 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020 - Proceedings, 9054737, Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, IEEE, pp. 7379-7383, IEEE International Conference on Acoustics, Speech, and Signal Processing, Barcelona, Spain, 04/05/2020. https://doi.org/10.1109/ICASSP40776.2020.9054737