Research Article

Facial Emotion Recognition and Synthesis with Convolutional Neural Networks

by Karkuzhali S., Murugeshwari R., Umadevi V.
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 186 - Number 11
Year of Publication: 2024
10.5120/ijca2024923159

Karkuzhali S., Murugeshwari R., Umadevi V. Facial Emotion Recognition and Synthesis with Convolutional Neural Networks. International Journal of Computer Applications 186, 11 (Mar 2024), 1-11. DOI=10.5120/ijca2024923159

@article{ 10.5120/ijca2024923159,
author = { Karkuzhali S., Murugeshwari R., Umadevi V. },
title = { Facial Emotion Recognition and Synthesis with Convolutional Neural Networks },
journal = { International Journal of Computer Applications },
issue_date = { Mar 2024 },
volume = { 186 },
number = { 11 },
month = { Mar },
year = { 2024 },
issn = { 0975-8887 },
pages = { 1-11 },
numpages = {9},
url = { https://ijcaonline.org/archives/volume186/number11/facial-emotion-recognition-and-synthesis-with-convolutional-neural-networks/ },
doi = { 10.5120/ijca2024923159 },
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Journal Article
%A Karkuzhali S.
%A Murugeshwari R.
%A Umadevi V.
%T Facial Emotion Recognition and Synthesis with Convolutional Neural Networks
%J International Journal of Computer Applications
%@ 0975-8887
%V 186
%N 11
%P 1-11
%D 2024
%I Foundation of Computer Science (FCS), NY, USA
Abstract

Facial expressions are a crucial component of human communication, conveying emotions, intentions, and social signals. In the era of artificial intelligence and computer vision, automatic systems for facial expression synthesis and recognition have attracted significant attention because of their wide range of applications, including human-computer interaction, virtual reality, emotional analysis, and healthcare. This research integrates deep convolutional neural networks (CNNs) to address the challenges of both facial expression synthesis and recognition. On the synthesis side, a generative CNN architecture is proposed to synthesize realistic facial expressions, allowing various emotional states to be generated from neutral faces. The network learns to capture the intricate details of human expressions, including subtle muscle movements and the spatial relationships among facial features. For facial expression recognition, a separate CNN-based model is developed to classify these synthesized expressions accurately. The recognition model is trained on a large dataset of annotated facial expressions and is designed to handle real-world variations in lighting, pose, and occlusion. The CNN automatically learns relevant features from raw image data, eliminating the need for manual feature engineering.

The experimental results demonstrate the effectiveness of the proposed approach. The synthesized expressions exhibit a high degree of realism and diversity, effectively capturing the nuances of human emotion, and the recognition model achieves state-of-the-art accuracy in classifying them, surpassing traditional methods and showcasing the power of deep learning in this domain. This research contributes to the advancement of automatic facial expression synthesis and recognition, with potential applications in human-computer interaction, affective computing, and virtual environments, and offers a promising avenue for enabling more emotionally aware and responsive AI systems.

The significance of emotion classification in human-machine interaction has grown considerably. Over the past decade, businesses have become increasingly attuned to the insights that analyzing a person's facial expressions in images or videos can provide about their emotional state, and organizations now use emotion recognition to gauge customer sentiment towards their products; the applications extend well beyond market research and digital advertising. CNNs are a valuable tool for recognizing emotions from facial landmarks because they automatically extract the relevant information. Challenges such as variations in brightness, background, and other factors can be mitigated by isolating the essential features through preprocessing steps such as face resizing and normalization. Neural networks, however, depend on large datasets for optimal performance; where data availability is limited, augmentation techniques such as rotation can compensate, and fine-tuning the CNN architecture can further improve prediction accuracy. Consequently, the approach enables real-time identification of seven distinct emotions (anger, sadness, happiness, disgust, neutrality, fear, and surprise) from facial expressions in images.
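To make the recognition pipeline described in the abstract concrete, the following is a minimal sketch, not the authors' implementation: face crops are resized and normalized, augmented with small rotations, and fed to a small CNN that predicts the seven emotion classes. The 48x48 grayscale input size (as in the FER-2013 dataset listed in the references), the layer configuration, and the directory layout in the usage comment are illustrative assumptions.

# Minimal sketch of the recognition side (assumptions noted above), in TensorFlow/Keras.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_EMOTIONS = 7  # anger, sadness, happiness, disgust, neutrality, fear, surprise

def build_emotion_cnn(input_shape=(48, 48, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Rescaling(1.0 / 255),           # normalization of pixel values
        layers.RandomRotation(0.05),           # data augmentation by small rotations
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, padding="same", activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.5),
        layers.Dense(NUM_EMOTIONS, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Hypothetical usage: a directory of resized face crops organized by emotion label.
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "faces/train", image_size=(48, 48), color_mode="grayscale")
# model = build_emotion_cnn()
# model.fit(train_ds, epochs=30)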
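On the synthesis side, the abstract describes a generative CNN that produces expressive faces from neutral ones but does not specify the architecture here, so the sketch below is only one plausible reading: a conditional encoder-decoder that takes a neutral face and a one-hot target-emotion code and is trained with a pixel reconstruction loss. All layer sizes, names, and the loss choice are assumptions.

# Minimal, assumed sketch of a conditional expression generator in TensorFlow/Keras.
import tensorflow as tf
from tensorflow.keras import layers, Model

NUM_EMOTIONS = 7

def build_expression_generator(img_shape=(48, 48, 1)):
    face = layers.Input(shape=img_shape, name="neutral_face")
    emotion = layers.Input(shape=(NUM_EMOTIONS,), name="target_emotion")  # one-hot code

    # Encoder: compress the neutral face into a spatial feature map.
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(face)  # 24x24
    x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)     # 12x12

    # Broadcast the target-emotion code over the feature map and concatenate.
    e = layers.Dense(12 * 12, activation="relu")(emotion)
    e = layers.Reshape((12, 12, 1))(e)
    x = layers.Concatenate()([x, e])

    # Decoder: upsample back to an expressive face with pixel values in [0, 1].
    x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)  # 24x24
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)  # 48x48
    out = layers.Conv2D(img_shape[-1], 3, padding="same", activation="sigmoid")(x)

    model = Model([face, emotion], out, name="expression_generator")
    model.compile(optimizer="adam", loss="mae")  # plain pixel reconstruction loss
    return model

In practice such a generator is usually paired with an adversarial or perceptual loss to sharpen the outputs; the plain reconstruction loss here only keeps the sketch self-contained.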

References
  1. Corneanu, C., Simón, M., Cohn, J. F., & Guerrero, S. (2016). Survey on RGB, 3D, thermal, and multimodal approaches for facial expression recognition: history, trends, and affect-related applications. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(8), 1548–1568.
  2. Sariyanidi, E., Gunes, H., & Cavallaro, A. (2015). Automatic analysis of facial affect: a survey of registration, representation, and recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(6), 1113–1133.
  3. Soleymani, M., Asghari-Esfeden, S., Fu, Y., & Pantic, M. (2016). Analysis of EEG signals and facial expressions for continuous emotion detection. IEEE Transactions on Affective Computing, 7(1), 17–28.
  4. Zhang, Z., Ping, L., Chen, C., & Tang, X. (2016). From facial expression recognition to interpersonal relation prediction. International Journal of Computer Vision, 126(5), 550–569.
  5. Xie, S., & Hu, H. (2019). Facial expression recognition using hierarchical features with deep comprehensive multipatches aggregation convolutional neural networks. IEEE Transactions on Multimedia, 21(1), 211–220.
  6. Fan, Y., Li, V., & Lam, J. (2020). Facial Expression Recognition with Deeply-Supervised Attention Network. IEEE Transactions on Affective Computing, 1-16.
  7. Iqbal, M., Abdullah-Al-Wadud, M., Ryu, B., Makhmudkhujaev, F., & Chae, O. (2020). Facial Expression Recognition with Neighborhood-Aware Edge Directional Pattern (NEDP). IEEE Transactions on Affective Computing, 11(1), 125–137.
  8. Kumawat, S., Verma, M., & Raman, S. (2019). LBVCNN: local binary volume convolutional neural network for facial expression recognition from image sequences. arXiv preprint arXiv:1904.07647.
  9. Tang, Y., Zhang, X., Hu, X., Wang, S., & Wang, H. (2021). Facial Expression Recognition Using Frequency Neural Network. IEEE Transactions on Image Processing, 30, 444–457.
  10. Bailly, K., & Dubuisson, S. (2019). Dynamic Pose-Robust Facial Expression Recognition by Multi-View Pairwise Conditional Random Forests. IEEE Transactions on Image Processing, 30, 167–181.
  11. Yan, Y., Huang, Y., Chen, S., Shen, C., & Wang, H. (2020). Joint Deep Learning of Facial Expression Synthesis and Recognition. IEEE Transactions on Multimedia, 22(11), 2792–2807.
  12. Agarwal, S., & Mukherjee, D. (2019). Synthesis of realistic facial expressions using expression map. IEEE Transactions on Multimedia, 21(4), 902–914.
  13. Huang, W., Zhang, S., Zhang, P., Zha, Y., Fang, Y., & Zhang, Y. (2021). Identity-aware Facial Expression Recognition via Deep Metric Learning based on Synthesized Images. IEEE Transactions on Multimedia, 1424-1445.
  14. Wen, S., Zhang, Y., Li, K., & Qian, M. (2018). Deep Emotion Recognition With Enhanced CNN Features. IEEE Transactions on Affective Computing.
  15. Prates, D. M. G., Penalva, E. M., & Giraldi, G. A. (2017). Facial Expression Recognition with Convolutional Neural Networks: Coping with Few Data and the Training Sample Order. Pattern Recognition Letters.
  16. Islam, A. M. J. S., Siddiquee, M. M., Shoyaib, A. M. A., & Alam, M. S. (2019). Deep Learning-Based Human Emotion Recognition from Facial Expression: A Review. Journal of Ambient Intelligence and Humanized Computing.
  17. Masi, I., Tran, A., Hassner, T., Leksut, J., & Medioni, G. (2016). Facial Expression Recognition in the Wild. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  18. Wu, X., He, R., Sun, Z., & Tan, T. (2018). A Light CNN for Deep Face Representation with Noisy Labels. IEEE Transactions on Information Forensics and Security.
  19. Kim, K. K., Park, W. T., & Kweon, I. S. (2018). DctNet: Face Recognition Using Discriminant Contextual Representation and Face Anti-Spoofing. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  20. Khorrami, M., Paine, T., & Huang, T. (2017). Do Deep Neural Networks Learn Facial Action Units When Doing Expression Recognition? Proceedings of the IEEE International Conference on Computer Vision (ICCV).
  21. Li, H., Lin, Z., Shen, X., Brandt, J., & Hua, G. (2015). A Convolutional Neural Network Cascade for Face Detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  22. Liu, M., Li, S., Shan, S., Wang, R., & Chen, X. (2015). Deeply Learning Deformable Facial Action Parts Model for Dynamic Expression Analysis. Proceedings of the IEEE International Conference on Computer Vision (ICCV).
  23. Song, Y., Li, M., Tao, D., & Sun, X. (2018). Facial Expression Recognition with Incomplete Data. IEEE Transactions on Image Processing.
  24. Yang, B., Cao, J., Ni, R., & Zhang, Y. (2018). Facial Expression Recognition Using Weighted Mixture Deep Neural Network Based on Double-Channel Facial Images. IEEE Access, 6, 4630-4640.
  25. The FER 2013 Dataset. [Online]. Available: https://www.kaggle.com/msambare/fer2013.
  26. Li, Y., Zeng, J., Shan, S., & Chen, X. (2019). Occlusion Aware Facial Expression Recognition Using CNN With Attention Mechanism. IEEE Transactions On Image Processing, 28(5), 2439-2450.
  27. Zhang, T., Zheng, W., Cui, Z., Zong, Y., Yan, J., & Yan, K. (2016). A Deep Neural Network-Driven Feature Learning Method for Multi-view Facial Expression Recognition. IEEE Transaction on Multimedia, 18(12), 2528-2536.
  28. Isola, P., Zhu, J., Zhou, T., & Efros, A. (2016). Image-to-image translation with conditional adversarial networks. arXiv preprint arXiv:1611.07004.
  29. Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., & Courville, A. (2017). Improved training of Wasserstein GANs. arXiv preprint arXiv:1704.00028.
  30. Lee, S. H., & Ro, Y. M. (2016). Partial Matching of Facial Expression Sequence Using Over-Complete Transition Dictionary for Emotion Recognition. IEEE Transactions On Affective Computing, 7(4), 387-408.
  31. Tanfous, A. B., Drira, H., & Amor, B. B. (2020). Sparse Coding of Shape Trajectories for Facial Expression and Action Recognition. IEEE Transactions On Pattern Analysis and Machine Intelligence, 42(10), 2594-2607.
  32. Wen, L., Zhou, J., Huang, W., & Chen, F. (2022). A Survey of Facial Capture for Virtual Reality, pp. 6042-6052.
Index Terms

Computer Science
Information Sciences

Keywords

Emotion classification, human-machine communication, facial expression synthesis, deep convolutional neural network, emotion recognition