Artificial Intelligence for Affective Computing
Keywords: Affective Computing, Facial Expression Recognition, Deep Learning
With the growing use of computers in daily life, how to design more user-friendly interfaces has attracted considerable attention. Enhancing the ability of computers to actively analyze human behaviour and respond accordingly is a field worth exploring. Emotion, as an expression of subjective states, is one such source of information. Affective computing enables computers to perceive emotions through various modalities, such as text and audio. Facial expressions, being highly expressive, provide especially rich cues for inferring emotion. A facial expression recognition system uses computer vision techniques to identify emotions and measure their intensity. However, few studies have addressed multi-face expression recognition, i.e., recognizing the expressions of several people at once. Furthermore, existing deep learning-based frameworks have relatively high computational requirements, making them unsuitable for resource-constrained edge devices. In this project, we propose a lightweight method that recognizes the emotions of multiple people simultaneously and construct a dataset of samples containing multiple facial expressions. The system adopts RetinaFace to detect faces and a Swin Transformer V2, pre-trained on FER2013 and compressed with a transformer-based knowledge distillation technique, to recognize facial expressions. Evaluation on the in-the-wild RAF-DB dataset and simulation tests on our newly constructed multi-face expression recognition dataset show that the system achieves performance close to that of state-of-the-art algorithms while maintaining a relatively small model size. Moreover, we evaluate the effect of each component in ablation studies, examine the system's ability to identify each individual emotion, and visualize the generated attention maps.
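To make the two-stage pipeline concrete, the sketch below chains off-the-shelf components in the way the abstract outlines: RetinaFace finds every face in an image, and a Swin Transformer V2 classifier labels each crop with one of the seven FER2013 emotions. The retina-face and timm packages, the swinv2_tiny_window8_256 backbone, the label order, and the assumption that the classification head is already fine-tuned for emotion recognition are illustrative choices, not the project's exact configuration.

import torch
import timm
from PIL import Image
from torchvision import transforms
from retinaface import RetinaFace

# Canonical FER2013 label order (assumed for this sketch).
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

# Swin Transformer V2 backbone with a 7-way emotion head; in the real
# system these weights would come from FER2013 training plus distillation.
model = timm.create_model("swinv2_tiny_window8_256", pretrained=True, num_classes=7)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def recognize_emotions(image_path):
    """Detect every face in the image and classify its expression."""
    image = Image.open(image_path).convert("RGB")
    # RetinaFace returns {face_id: {"facial_area": [x1, y1, x2, y2], ...}}.
    faces = RetinaFace.detect_faces(image_path)
    if not isinstance(faces, dict):  # defensive: no face detected
        return []
    results = []
    for face in faces.values():
        x1, y1, x2, y2 = face["facial_area"]
        crop = preprocess(image.crop((x1, y1, x2, y2))).unsqueeze(0)
        with torch.no_grad():
            probs = model(crop).softmax(dim=-1).squeeze(0)
        idx = int(probs.argmax())
        results.append((EMOTIONS[idx], float(probs[idx])))
    return results

print(recognize_emotions("group_photo.jpg"))  # e.g. [("happy", 0.91), ("neutral", 0.74)]

Because each detected face is cropped and classified independently, the same classifier handles one face or many, which is what allows the system to recognize emotions from multiple people in a single frame.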
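The distillation step can likewise be sketched in generic terms. Below is a minimal example of response-based knowledge distillation in the style of Hinton et al., in which a compact student transformer is trained against both the softened logits of a larger frozen teacher and the ground-truth labels; the temperature, the loss weighting, and this particular formulation are assumptions, not the project's specific transformer-based distillation objective.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.7):
    # Soft targets: KL divergence between temperature-scaled distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2  # rescale so gradients match the hard-label term
    # Hard targets: ordinary cross-entropy on the ground-truth emotions.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

In practice the temperature and the weighting alpha are tuned on a validation split, and the teacher's parameters stay frozen while the student is trained; the payoff is a smaller model, suited to edge devices, that retains most of the teacher's accuracy.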
For more information, please see the poster.