Multimodal emotion recognition is an active research topic in artificial intelligence.
Its main goal is to integrate multiple modalities to identify human emotional states. Current works generally assume accurate emotion labels on benchmark datasets and focus on developing more effective architectures.
However, existing technologies still struggle to meet the demands of practical applications.
To this end, we have successively organized the MER23 Challenge@ACM MM, MRAC23 Workshop@ACM MM, MER24 Challenge@IJCAI, and MRAC24 Workshop@ACM MM.
This year, we will continue to hold related workshops and challenges that bring together researchers from around the world to discuss recent research and future directions for robust multimodal emotion recognition.
In this workshop and challenge, we aim to bring together researchers from the fields of multimodal modeling of human affect,
modality robustness of affect recognition, low-resource affect recognition, human affect synthesis in multimedia,
privacy in affective computing, and applications in health, education, entertainment, etc.,
to further discuss recent research and future directions for affective computing in multimedia.
At the same time, we intend to provide a communication platform for all participants of MER24@IJCAI
to systematically evaluate the robustness of emotion recognition systems and promote the practical application of this technology.
April 30, 2024: We established an initial website for the MER25 Challenge and MRAC25 Workshop @ ACM MM
April 30, 2025: Data, baseline paper & code available
All submission deadlines are at 23:59 Anywhere on Earth (AoE).