When it comes to our moods and emotions, our faces can be very telling. Facial expression is an essential aspect of non-verbal communication in humans. Even if we can’t explain how we do it, we can usually tell from a person’s face how they feel. In many situations, reading facial expressions is particularly important. For example, a teacher might do it to check if their students are engaged or bored, and a nurse might do it to check if a patient’s condition has improved or worsened.
Thanks to advances in technology, computers can do a pretty good job of recognizing faces. Recognizing facial expressions, however, is an entirely different story. Many researchers working in the field of artificial intelligence (AI) have attempted to address this problem using various modeling and classification techniques, including the popular convolutional neural networks (CNNs). But facial expressions are complex, and recognizing them typically calls for intricate neural networks that demand extensive training and are computationally expensive.
In an effort to address these issues, a research team led by Dr. Jia Tian of Jilin Engineering Normal University in China recently developed a new CNN model for facial expression recognition. As described in an article published in the Journal of Electronic Imaging, the team focused on striking a good balance among the training speed, memory usage, and recognition accuracy of the model.
One of the main differences between conventional CNN models and the one proposed by the team was the use of depthwise separable convolutions. This type of convolution — the core processing performed at each layer of a CNN — differs from the standard approach in that it processes the different channels of the input image (such as RGB) independently and combines the results at the end.
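The efficiency gain is easy to see just by counting weights. The following is a minimal sketch (not the authors' code) comparing the parameter count of a standard convolutional layer with that of a depthwise separable one, where a per-channel k × k "depthwise" step is followed by a 1 × 1 "pointwise" step that combines the channels:

```python
def conv_params(k: int, c_in: int, c_out: int) -> int:
    # Standard convolution: every output channel mixes all input channels,
    # so each of the c_out filters needs k*k weights per input channel.
    return k * k * c_in * c_out

def depthwise_separable_params(k: int, c_in: int, c_out: int) -> int:
    # Depthwise step: one k x k filter per input channel, applied independently.
    depthwise = k * k * c_in
    # Pointwise step: a 1 x 1 convolution that combines the per-channel results.
    pointwise = c_in * c_out
    return depthwise + pointwise

# Example: a 3x3 layer mapping 3 input channels (RGB) to 64 output channels.
print(conv_params(3, 3, 64))                 # 1728 weights
print(depthwise_separable_params(3, 3, 64))  # 27 + 192 = 219 weights
```

For this toy layer the separable version needs roughly an eighth of the weights, and the savings grow with the number of channels — consistent with the small parameter budget the article reports.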
By combining this type of convolution with a technique called "pre-activated residual blocks," the proposed model was able to process facial expressions in a coarse-to-fine manner. In this way, the team significantly reduced the computational cost and the number of parameters the system must learn for accurate classification. "We managed to obtain a model with good generalization ability with only 58,000 parameters," Tian said.
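For illustration only (this is not the team's architecture), a pre-activated residual block applies the activation *before* each weight layer and adds the unmodified input back through a skip connection. Here is a minimal NumPy sketch, with dense weight matrices standing in for convolutions and normalization omitted for brevity:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def preact_residual_block(x, w1, w2):
    """Pre-activation residual block: the nonlinearity comes BEFORE each
    weight layer, and the input is added back via an identity skip."""
    h = relu(x) @ w1   # first pre-activated weight layer
    h = relu(h) @ w2   # second pre-activated weight layer
    return x + h       # identity skip connection

# Toy check: with zero weights the residual branch contributes nothing,
# so the skip connection passes the input through unchanged.
x = np.ones((2, 8))
out = preact_residual_block(x, np.zeros((8, 8)), np.zeros((8, 8)))
```

The skip connection is what keeps deep stacks of such blocks trainable: gradients can flow through the identity path even when the weighted branch is poorly conditioned.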
The researchers put their model to the test by comparing its facial expression recognition performance in a classroom setting against that of other reported models. They trained and tested all the models using a popular dataset called the "Extended Cohn-Kanade Dataset," which contains more than 35,000 labeled images of faces expressing common emotions. The results were encouraging, with the model developed by Tian's team achieving the highest accuracy (72.4%) with the fewest parameters.
“The model we developed is especially effective for facial expression recognition when using small sample data sets. The next step in our research is to further optimize the architecture of the model and achieve an even better classification performance,” said Tian.
Since facial expression recognition can be widely applied in areas such as human-computer interaction, safe driving, smart monitoring, surveillance, and medicine, let's hope the team realizes their vision soon!
Read the article by J. Tian, J. Fang, and Y. Wu, "Recognition of facial expressions in the classroom based on improved Xception model," Journal of Electronic Imaging 31(5), 051416 (2022), doi: 10.1117/1.JEI.31.5.051416.