MER25@ACM MM and MRAC25@ACM MM


Affective Computing Meets Large Language Models



MER2025 is the third year of our MER series of challenges, aiming to bring together researchers in the affective computing community to explore emerging trends and future directions in the field. Previously, MER23 Challenge@ACM MM focused on multi-label learning, noise robustness, and semi-supervised learning, while MER24 Challenge@IJCAI introduced a new track dedicated to open-vocabulary emotion recognition. This year, MER2025 centers on the theme "Affective Computing Meets Large Language Models". We aim to shift the paradigm from traditional categorical frameworks reliant on predefined emotion taxonomies to large language model (LLM)-driven generative methods, offering innovative solutions for more accurate and reliable emotion understanding. The challenge features four tracks: MER-SEMI focuses on fixed categorical emotion recognition enhanced by semi-supervised learning; MER-FG explores fine-grained emotions, expanding recognition from basic to nuanced emotional states; MER-DES incorporates multimodal cues (beyond emotion words) into predictions to enhance model interpretability; MER-PR investigates whether emotion prediction results can improve personality recognition performance. For the first three tracks, baseline code is available at MERTools and datasets can be accessed via Hugging Face. For the last track, the dataset and baseline code are available on GitHub.

News

April 30, 2025: The initial website for the MER25 Challenge and MRAC25 Workshop is now available.

MER25 Challenge@ACM MM


Track 1. MER-SEMI. MER-SEMI spans three consecutive MER challenges, aiming to enhance the performance of categorical emotion recognition algorithms through semi-supervised learning and unlabeled data. This year, we expanded the dataset by incorporating more labeled and unlabeled samples. Participants are encouraged to leverage semi-supervised learning techniques, such as masked auto-encoders or contrastive learning, to achieve better results.
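
As an illustrative starting point (not part of the official baseline), the sketch below shows a SimCLR-style contrastive objective (NT-Xent) in PyTorch that could be applied to embeddings of two augmented views of the same unlabeled clip; the encoder, augmentation pipeline, and embedding dimension are placeholders left to participants.

import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.07):
    """SimCLR-style contrastive loss over two views of the same batch.

    z1, z2: (batch, dim) embeddings of two augmentations of the same clips.
    """
    batch = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2B, dim), unit-norm
    sim = z @ z.t() / temperature                          # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))                      # mask self-similarity
    # the positive for sample i is its other view: index i+B (or i-B)
    targets = torch.cat([torch.arange(batch) + batch,
                         torch.arange(batch)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Usage (hypothetical encoder producing clip embeddings):
# z1, z2 = encoder(view1), encoder(view2)
# loss = nt_xent_loss(z1, z2)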

Track 2. MER-FG. Current frameworks primarily focus on basic emotions, often failing to capture the complexity and subtlety of human emotions. This track shifts the emphasis to fine-grained MER, enabling the prediction of a broader range of emotions. Following previous works [1, 2], participants are encouraged to leverage LLMs for this purpose. Given that LLMs possess extensive vocabularies, they hold the potential to generate more diverse emotion categories beyond basic labels.
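
As a rough illustration of how an LLM might be prompted for open-vocabulary labels, the sketch below uses the Hugging Face transformers text-generation pipeline; the model name, prompt wording, and output parsing are placeholder assumptions, and the official baseline (AffectGPT/MERTools) provides its own interface.

from transformers import pipeline

# Hypothetical instruction-tuned model; any generative LLM could be substituted.
generator = pipeline("text-generation", model="Qwen/Qwen2.5-7B-Instruct")

def fine_grained_labels(transcript: str, visual_desc: str) -> list[str]:
    prompt = (
        "Given the following multimodal clues, list all fine-grained emotion words "
        "that apply, separated by commas.\n"
        f"Transcript: {transcript}\n"
        f"Visual description: {visual_desc}\n"
        "Emotions:"
    )
    out = generator(prompt, max_new_tokens=64)[0]["generated_text"]
    # keep only the continuation after the prompt and split into individual labels
    labels = out[len(prompt):].split(",")
    return [w.strip().lower() for w in labels if w.strip()]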

Track 3. MER-DES. The first two tracks primarily focus on emotion words, neglecting the integration of multimodal clues during the inference process. This omission results in prediction outcomes that lack interpretability. Moreover, emotion words struggle to fully capture the dynamic, diverse, and sometimes ambiguous nature of human emotions. This track seeks to leverage free-form, natural language descriptions to represent emotions [2, 3], offering greater flexibility to achieve more accurate emotion representations and enhance model interpretability.

Track 4. MER-PR. Personality and emotion are deeply intertwined in human behavior and social interactions, yet current research often treats them as separate tasks, neglecting their inherent correlations. This track seeks to investigate the interplay between emotion and personality, exploring whether emotion recognition can enhance the accuracy of personality predictions. Participants are encouraged to employ techniques such as multi-task learning to analyze the influence of emotion on personality prediction.
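
One possible realization of such multi-task learning is sketched below in PyTorch: a shared backbone feeding a categorical emotion head and a personality regression head, trained with a weighted joint loss. The dimensions, trait count, and loss weight are illustrative assumptions, not the official baseline.

import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    """Shared encoder with an emotion head and a personality head."""
    def __init__(self, feat_dim=512, hidden=256, num_emotions=6, num_traits=5):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.emotion_head = nn.Linear(hidden, num_emotions)     # categorical emotions
        self.personality_head = nn.Linear(hidden, num_traits)   # e.g., Big-Five scores

    def forward(self, x):
        h = self.backbone(x)
        return self.emotion_head(h), self.personality_head(h)

def multitask_loss(emo_logits, emo_labels, pers_pred, pers_scores, alpha=0.5):
    # weighted sum of the two task losses; alpha is an illustrative trade-off
    ce = nn.functional.cross_entropy(emo_logits, emo_labels)
    mse = nn.functional.mse_loss(pers_pred, pers_scores)
    return alpha * ce + (1 - alpha) * mse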


Dataset. For the first three tracks, to download the dataset, participants must complete an EULA available on Hugging Face. The EULA clearly states that the dataset is for academic research purposes only and prohibits any modifications or uploads to the Internet. We will review the submitted EULA promptly; once approved, participants will gain access to the dataset. For the last track, the dataset and baseline code are available on GitHub.


Result submission. For MER-SEMI, MER-FG, and MER-DES, the test samples are selected from the 124k unlabeled samples. To reduce the task difficulty, we narrow the evaluation scope from 124k to 20k candidate samples. Participants must submit predictions for these 20k samples, which cover all the test samples of these tracks. Each track has distinct objectives. For MER-SEMI, participants must predict the most likely label from six predefined categories: worried, happy, neutral, angry, surprise, and sad. For MER-FG, participants can freely predict any emotion labels, with no restrictions on category or quantity. For MER-DES, participants are required to submit both multimodal evidence and the corresponding emotion labels to improve model interpretability. For MER-PR, we provide an official test set, and participants can directly submit predictions for it.
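
For concreteness, the snippet below shows one possible way to dump MER-SEMI predictions for the 20k candidate clips to a CSV file; the column names and file layout are hypothetical, and participants should follow the exact submission format specified in the baseline repository (MERTools).

import csv

# Hypothetical submission writer; "name" and "discrete" are placeholder column names.
LABELS = ["worried", "happy", "neutral", "angry", "surprise", "sad"]

def write_semi_predictions(predictions: dict[str, str], path: str = "mer_semi_pred.csv"):
    """predictions maps clip name -> one of the six predefined labels."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["name", "discrete"])
        for name, label in predictions.items():
            assert label in LABELS, f"unexpected label: {label}"
            writer.writerow([name, label])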


Paper submission. All participants are encouraged to submit a paper describing their solution to the MRAC25 Workshop.


Baseline paper: https://arxiv.org/abs/2504.19423
Baseline code (Track1~Track3): https://github.com/zeroQiaoba/MERTools/tree/master/MER2025
Baseline code (Track4): https://github.com/cai-cong/MER25_personality
Contact email: merchallenge.contact@gmail.com; lianzheng2016@ia.ac.cn



[1] Zheng Lian, Haiyang Sun, Licai Sun, Haoyu Chen, Lan Chen, Hao Gu, Zhuofan Wen, Shun Chen, Siyuan Zhang, Hailiang Yao, Bin Liu, Rui Liu, Shan Liang, Ya Li, Jiangyan Yi, Jianhua Tao. OV-MER: Towards Open-Vocabulary Multimodal Emotion Recognition. arXiv preprint arXiv:2410.01495, 2024.
[2] Zheng Lian, Haoyu Chen, Lan Chen, Haiyang Sun, Licai Sun, Yong Ren, Zebang Cheng, Bin Liu, Rui Liu, Xiaojiang Peng, Jiangyan Yi, Jianhua Tao. AffectGPT: A New Dataset, Model, and Benchmark for Emotion Understanding with Multimodal Large Language Models. arXiv preprint arXiv:2501.16566, 2025.
[3] Zheng Lian, Haiyang Sun, Licai Sun, Hao Gu, Zhuofan Wen, Siyuan Zhang, Shun Chen, Mingyu Xu, Ke Xu, Kang Chen, Lan Chen, Shan Liang, Ya Li, Jiangyan Yi, Bin Liu, Jianhua Tao. Explainable Multimodal Emotion Recognition. arXiv preprint arXiv:2306.15401v5, 2024.

MRAC25 Workshop@ACM MM



Besides papers for the MER25 Challenge, we also invite submissions on any aspect of multimodal emotion recognition and synthesis with deep learning. Topics include, but are not limited to:

  • Large-scale data generation or inexpensive annotation for affective computing
  • Generative AI for affective computing using multimodal signals
  • Multimodal methods for emotion recognition
  • Privacy-preserving large-scale emotion recognition in the wild
  • Affective computing applications in education, entertainment, and healthcare
  • Explainable or privacy-preserving AI in affective computing
  • Generative and responsible personalization of affective phenomena estimators with few-shot learning
  • Bias in affective computing data (e.g., lack of multi-cultural datasets)
  • Semi-/weakly-/un-/self-supervised learning and other novel methods for affective computing


Format: Submitted papers (.pdf format) must use the ACM Article Template: paper template. Please use the template in the traditional double-column format to prepare your submissions. To comply with double-blind review, please comment out all author information in the submitted manuscript.

Length: Manuscripts are limited to one of two options: a) 4 pages plus 1 page of references; or b) 8 pages plus up to 2 pages of references. The reference pages must contain only references. Overlength papers will be rejected without review. Appendices following the main paper in the main submission file are not allowed.

Peer Review and publication in ACM Digital Library: Paper submissions must conform to the "double-blind" review policy. All papers will be peer-reviewed by experts in the field and will receive at least two reviews. Acceptance will be based on relevance to the workshop, scientific novelty, and technical quality. The workshop papers will be published in the ACM Digital Library.

Contact email: merchallenge.contact@gmail.com; lianzheng2016@ia.ac.cn

Schedule

 

Apr 30, 2025: Data, baseline paper & code available

Jun 26, 2025: Results submission opens

Jul 10, 2025: Results submission deadline

Jul 20, 2025: Paper submission deadline

Aug 1, 2025: Paper acceptance notification

Aug 11, 2025: Deadline for camera-ready papers

Oct 27-31, 2025: MRAC25 workshop@ACM MM (Dublin, Ireland)


All submission deadlines are at 23:59 Anywhere on Earth (AoE).

Speakers

Haoyu Chen

Assistant Professor
University of Oulu

Title: Coming Soon

Coming Soon

Organisers

 

Jianhua Tao

Tsinghua University

 

Zheng Lian

Institute of Automation, Chinese Academy of Sciences

 

Björn W. Schuller

Technical University of Munich & Imperial College London

 
 

Guoying Zhao

University of Oulu

 

Erik Cambria

Nanyang Technological University

 

Challenge Chairs

 

Rui Liu

Inner Mongolia University

 

Kele Xu

National University of Defense Technology

 

Bin Liu

Institute of Automation, Chinese Academy of Sciences

 

Xuefei Liu

Tianjin Normal University

 

Ya Li

Beijing University of Posts and Telecommunications

 

Jinming Zhao

Qiyuan Lab

Workshop Chairs

 

Yazhou Zhang

Tianjin University

 

Xin Liu

Lappeenranta-Lahti University of Technology

 

Xiaojiang Peng

Shenzhen Technology University

 

Yong Li

Southeast University

 

Xie Chen

Shanghai Jiao Tong University

 

Licai Sun

University of Oulu

Data Chairs

 

Zebang Cheng

Shenzhen University

 

Haolin Zuo

Inner Mongolia University

 

Ziyang Ma

Shanghai Jiao Tong University

 

WeChat Group