Convolutional Approaches in Transfer Learning for Facial Emotion Analysis
Keywords:
Emotion detection, Convolutional neural network (CNN), Transfer learning, Deep learning

Abstract
The scientific community has shown significant interest in facial expression recognition (FER) because of its wide range of potential applications. The primary task of FER is to map facial expressions to their corresponding emotional states. Conventional FER pipelines consist of two main stages: feature extraction and emotion classification. Because deep neural networks, particularly convolutional neural networks (CNNs), extract features automatically, they are now widely used in FER. Although earlier studies have applied shallow multi-layer CNNs to FER, a significant drawback of such models is their limited capacity to extract discriminative features from high-resolution images. Many existing methods also restrict themselves to frontal images and exclude profile views, which are essential for real-world FER systems. This research introduces a deep convolutional neural network (DCNN) model that incorporates transfer learning (TL) to improve FER accuracy. The proposed approach takes a pre-trained DCNN, replaces its dense upper layer(s) with layers suited to the FER task, and fine-tunes the resulting model on facial expression data. To further improve accuracy, the fine-tuning procedure is applied iteratively, one pre-trained DCNN block at a time. Eight pre-trained DCNN models, most notably VGG-16 and VGG-19, are evaluated, with validation carried out on two datasets, KDEF and JAFFE.
Despite the difficulty of the FER task, including the multiple viewing angles represented in the KDEF dataset, the proposed approach stands out for the high accuracy it achieves. Using 10-fold cross-validation, VGG-16 attained the highest FER accuracies: 93.7% on the KDEF test set and 100% on the JAFFE test set. These results highlight the strengths of the proposed FER system, particularly its ability to detect emotions reliably, and demonstrate promising performance on the KDEF dataset in the context of profile views.
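The transfer-learning setup described in the abstract can be sketched roughly as follows in Keras. This is a minimal illustration, not the authors' exact configuration: the number of emotion classes, the head size, and the learning rate are assumptions, and `weights=None` is used here only to keep the sketch self-contained (a real run would load the ImageNet pre-trained weights).

```python
# Sketch of the described approach: a pre-trained VGG-16 backbone whose
# dense top layers are replaced by a new classifier head, followed by
# block-wise fine-tuning. Hyperparameters below are illustrative.
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 7  # assumption: seven basic emotion categories

base = keras.applications.VGG16(
    weights=None,             # the paper would use "imagenet" pre-trained weights
    include_top=False,        # drop VGG-16's original dense layers
    input_shape=(224, 224, 3),
)

# Stage 1: freeze all convolutional blocks and train only the new head.
base.trainable = False

model = keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),          # replacement dense layer (illustrative size)
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(
    optimizer=keras.optimizers.Adam(1e-4),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

def unfreeze_block(backbone, block_name):
    """Unfreeze a single VGG block (e.g. 'block5') for the next
    fine-tuning stage, keeping all other layers frozen."""
    backbone.trainable = True
    for layer in backbone.layers:
        layer.trainable = layer.name.startswith(block_name)

# Later stages: unfreeze one block at a time (block5, then block4, ...)
# and continue training at a low learning rate, mirroring the iterative
# block-wise fine-tuning the paper describes.
unfreeze_block(base, "block5")
```

In practice each unfreezing step would be followed by recompiling the model with a reduced learning rate and training for a few more epochs on the facial expression data before moving to the next block.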
License
This is an open access article published by the Research Center of Computing & Biomedical Informatics (RCBI), Lahore, Pakistan, under a CC BY 4.0 International License.