Publications

Research Interests: Socially Aware AI, Multimodal AI, Humor Generation and Detection, AI for Mental Health

I work as a research assistant under Prof. Prasenjit Mitra at the Spatial and Language Technologies Lab at CMU Africa, focusing on developing AI systems that understand human communication with social awareness and cultural nuance.

Publications

E. Ajayi, M. Kachweka, M. Deku, and E. Aiken
AAAI AIHealth Bridge 2026 (Oral Presentation) • arXiv

Mental health challenges and cyberbullying are increasingly prevalent in digital spaces, necessitating scalable and interpretable detection systems. This paper introduces a unified multiclass classification framework for detecting ten distinct mental health and cyberbullying categories from social media data. We curate datasets from Twitter and Reddit, implementing a rigorous 'split-then-balance' pipeline to train on balanced data while evaluating on a realistic, held-out imbalanced test set.
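As a rough illustration of the split-then-balance idea (not the paper's actual pipeline), the sketch below splits a labelled DataFrame before any balancing, so the held-out test set keeps its natural class imbalance; the column names and the random-oversampling choice are assumptions for the example.

```python
# Minimal sketch of a "split-then-balance" pipeline: split first so the test
# set keeps its natural class imbalance, then balance only the training data.
# Column names ("text", "label") and random oversampling are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split

def split_then_balance(df: pd.DataFrame, test_size: float = 0.2, seed: int = 42):
    # 1) Stratified split: the held-out test set mirrors the real-world
    #    (imbalanced) label distribution.
    train_df, test_df = train_test_split(
        df, test_size=test_size, stratify=df["label"], random_state=seed
    )
    # 2) Balance only the training split, e.g. by oversampling each minority
    #    class up to the majority-class count.
    max_count = train_df["label"].value_counts().max()
    balanced_parts = [
        group.sample(n=max_count, replace=True, random_state=seed)
        for _, group in train_df.groupby("label")
    ]
    balanced_train = pd.concat(balanced_parts).sample(frac=1.0, random_state=seed)
    return balanced_train, test_df
```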

We conduct a comprehensive evaluation comparing traditional lexical models, hybrid approaches, and several end-to-end fine-tuned transformers. Our results demonstrate that end-to-end fine-tuning is critical for performance, with the domain-adapted MentalBERT emerging as the top model, achieving an accuracy of 0.92 and a Macro F1 score of 0.76, surpassing both its generic counterpart and a zero-shot LLM baseline.
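For readers unfamiliar with why both metrics are reported, the toy example below computes accuracy and Macro F1 with scikit-learn; the label arrays are invented placeholders, not the paper's data. Macro F1 averages per-class F1 scores with equal weight, so rare categories count as much as frequent ones on an imbalanced test set.

```python
# Toy illustration of the two reported metrics on an imbalanced multiclass
# test set; these labels are placeholders, not the paper's data.
from sklearn.metrics import accuracy_score, f1_score

y_true = ["depression", "depression", "anxiety", "bullying", "none", "none", "none"]
y_pred = ["depression", "anxiety",    "anxiety", "none",     "none", "none", "none"]

print("accuracy:", accuracy_score(y_true, y_pred))
# Macro F1 weighs every class equally, penalising poor performance on rare ones.
print("macro F1:", f1_score(y_true, y_pred, average="macro"))
```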

Grounded in a comprehensive ethical analysis, we frame the system as a human-in-the-loop screening aid, not a diagnostic tool. To support this, we introduce a hybrid SHAP-LLM explainability framework and present a prototype dashboard ("Social Media Screener") designed to integrate model predictions and their explanations into a practical workflow for moderators. Our work provides a robust baseline and highlights the need for future multi-label, clinically validated datasets at the critical intersection of online safety and computational mental health.
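The SHAP side of such an explainability setup can be sketched roughly as follows, using the shap library's text explainer over a Hugging Face classification pipeline; the checkpoint is a public stand-in rather than the paper's fine-tuned MentalBERT, and the input text is invented, so this is an illustration of the technique, not the paper's framework.

```python
# Rough sketch of token-level SHAP attributions for a transformer text
# classifier; the checkpoint is a public stand-in, not the paper's
# domain-adapted MentalBERT, and the example input is invented.
import shap
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    top_k=None,  # return scores for every class, which SHAP needs
)
explainer = shap.Explainer(clf)  # dispatches to a text (Partition) explainer
shap_values = explainer(["I can't sleep and nothing feels worth doing anymore"])
shap.plots.text(shap_values)  # highlights which tokens drove the prediction
```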

E. Ajayi and P. Mitra
Submitted to ACM

Automatic humor detection, the task of computationally identifying humorous content, is increasingly critical as Large Language Models (LLMs) become integrated into human communication platforms like chatbots and virtual assistants. However, understanding humor poses significant challenges for AI due to its reliance on complex context, cultural nuances, linguistic ambiguity, and multimodal cues. Current research is fragmented across different humor types, languages, modalities, and evaluation benchmarks, particularly concerning the capabilities and limitations of modern LLMs.

This survey provides a comprehensive synthesis of the automatic humor detection field, tracing its evolution from foundational psychological and linguistic theories through classical machine learning, deep learning, and the recent transformer-based LLM paradigm. We organize and analyze computational methods, feature engineering techniques, benchmark datasets (text-only, multimodal, multilingual), and evaluation metrics. We critically examine LLM adaptation strategies, including fine-tuning, parameter-efficient fine-tuning (PEFT), prompt engineering, and multi-task learning, alongside developments in multimodal and cross-lingual humor understanding.
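As a concrete example of one adaptation strategy the survey covers, the sketch below wraps a base model with LoRA adapters via the peft library; the model name, label count, and hyperparameters are illustrative assumptions, not values drawn from the survey.

```python
# Minimal sketch of LoRA-style parameter-efficient fine-tuning (PEFT).
# Model name, label count, and hyperparameters are illustrative placeholders.
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model, TaskType

base = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2  # e.g. humorous vs. not humorous
)
lora_cfg = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,                  # low-rank dimension of the adapters
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["query", "value"],  # attention projections to adapt
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the small LoRA adapters are trainable
```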

Our analysis reveals that while LLMs demonstrate improved performance in capturing surface humor patterns, significant gaps persist in deep pragmatic reasoning, cultural grounding, multimodal integration, and explainability compared to human cognition. We identify key open challenges, including data scarcity, evaluation inconsistencies, the humor-offensiveness boundary, and the need for more robust, culturally aware, and interpretable models. By consolidating the field's progress and pinpointing critical limitations, this survey aims to guide future interdisciplinary research towards developing more socially intelligent and nuanced AI systems capable of genuinely understanding human humor.

E. Ajayi, B. Tadele, E. Umwari, M. Deku, P. Singadi, C. Edeh, and J. Udahemuka
Preprint, 2025

This study examines the digital representation of African languages and the challenges it presents for current language detection tools. We evaluate their performance on Yoruba, Kinyarwanda, and Amharic. While these languages are spoken by millions, their online usage on conversational platforms is often sparse, heavily influenced by English, and not representative of the authentic, monolingual conversations prevalent among native speakers. This scarcity of readily available, authentic conversational data online poses a significant challenge for training language models.

To investigate this, data was collected from subreddits and local news sources for each language. The analysis showed a stark contrast between the two sources. Reddit data was minimal and characterized by heavy code-switching. Conversely, local news media offered a robust source of clean, monolingual language data, which also prompted more user engagement in the local language on the news publishers' social media pages. Language detection models, including the specialized AfroLID and a general LLM, performed with near-perfect accuracy on the clean news data but struggled with the code-switched Reddit posts.
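A minimal sketch of this kind of language-identification check is shown below, using fastText's publicly released lid.176 model as a stand-in for the study's actual tools (AfroLID and an LLM baseline, whose interfaces are not shown); the sample sentences are invented for illustration.

```python
# Rough sketch of running an off-the-shelf language identifier over sample
# posts. fastText's public lid.176 model is a stand-in for the study's tools
# (AfroLID, an LLM baseline); the sample sentences are invented.
import fasttext

# Model download: https://dl.fbaipublicfiles.com/fasttext/supervised-models/lid.176.bin
lid = fasttext.load_model("lid.176.bin")

samples = {
    "clean Yoruba": "Bawo ni o se wa loni?",
    "clean Kinyarwanda": "Amakuru yo mu gihugu yatangajwe uyu munsi.",
    "code-switched": "I went to the isoko today and ibiciro byazamutse cyane",
}

for tag, text in samples.items():
    labels, probs = lid.predict(text, k=1)
    print(f"{tag}: predicted={labels[0]} (p={probs[0]:.2f})")
```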

The study concludes that professionally curated news content is a more reliable and effective source than conversational platforms for training context-rich AI models for African languages. It also highlights the need for future models that can handle both clean and code-switched text to improve detection accuracy for African languages.

* denotes equal contribution

This page will be updated as new publications and research outputs become available. For the most current information about ongoing research projects, please refer to my CV or contact me directly.