Landmarks-assisted collaborative deep framework for automatic 4D facial expression recognition
Behzad, Muzammil; Vo, Nhat; Li, Xiaobai; Zhao, Guoying (2021-01-18)
M. Behzad, N. Vo, X. Li and G. Zhao, "Landmarks-assisted Collaborative Deep Framework for Automatic 4D Facial Expression Recognition," 2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020), Buenos Aires, Argentina, 2020, pp. 1-5, doi: 10.1109/FG47880.2020.00023
© 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
https://rightsstatements.org/vocab/InC/1.0/
https://urn.fi/URN:NBN:fi-fe202103036404
Abstract
We propose a novel landmarks-assisted collaborative end-to-end deep framework for 4D facial expression recognition (FER). From 4D face scan data, we compute various geometrical images and then apply rank pooling to generate dynamic images that encapsulate important facial muscle movements over time. In addition, the given 3D landmarks are projected onto a 2D plane as binary images, and convolutional layers extract sequences of feature vectors for every landmark video. During training, the dynamic images are used to train an end-to-end deep network, while the feature vectors of the landmark images are used to train a long short-term memory (LSTM) network. The final, improved set of expression predictions is obtained when the dynamic and landmark images collaborate over multiple views in the proposed deep framework. Extensive experiments on the widely adopted BU-4DFE database under commonly used settings show that our collaborative framework outperforms state-of-the-art 4D FER methods, reaching a promising classification accuracy of 96.7% and demonstrating its effectiveness.
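Two of the preprocessing steps named in the abstract can be illustrated in code. The sketch below is not taken from the paper: it assumes the common approximate-rank-pooling formulation of dynamic images (a linearly time-weighted sum of frames, with weights α_t = 2t − T − 1) and a simple nearest-pixel rasterization for projecting 3D landmarks onto a 2D binary image; the actual paper may use different rank-pooling solvers, normalization, or image sizes.

```python
import numpy as np

def dynamic_image(frames):
    """Collapse a frame sequence into one 'dynamic image' via approximate
    rank pooling: a weighted sum whose weights grow linearly with time,
    so temporal evolution (e.g. facial muscle movement) is encoded in a
    single image. Coefficients alpha_t = 2t - T - 1 are an assumption here."""
    T = len(frames)
    alphas = np.array([2.0 * t - T - 1 for t in range(1, T + 1)])
    di = np.tensordot(alphas, np.asarray(frames, dtype=np.float64), axes=1)
    # Rescale to [0, 255] so the result can feed an ordinary CNN input.
    rng = di.max() - di.min()
    di = (di - di.min()) / (rng + 1e-8) * 255.0
    return di.astype(np.uint8)

def landmarks_to_binary_image(landmarks_3d, size=64):
    """Project 3D landmarks onto the XY plane and rasterize them as a
    binary image, one frame of the landmark stream described above.
    The 64x64 resolution is an illustrative choice, not the paper's."""
    pts = np.asarray(landmarks_3d, dtype=np.float64)[:, :2]  # drop the Z axis
    pts -= pts.min(axis=0)
    pts /= (pts.max(axis=0) + 1e-8)                          # normalize to [0, 1]
    img = np.zeros((size, size), dtype=np.uint8)
    idx = np.clip((pts * (size - 1)).round().astype(int), 0, size - 1)
    img[idx[:, 1], idx[:, 0]] = 1                            # row = y, col = x
    return img
```

In a pipeline like the one described, `dynamic_image` would be applied per geometrical-image sequence to produce the CNN inputs, while `landmarks_to_binary_image` would be applied per frame to produce the landmark video fed through convolutional layers and the LSTM.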
Collections
- Open access [31929]