no code implementations • 3 Feb 2025 • YuHang Zhou, Giannis Karamanolakis, Victor Soto, Anna Rumshisky, Mayank Kulkarni, Furong Huang, Wei Ai, Jianhua Lu
The recent success of specialized Large Language Models (LLMs) in domains such as mathematical reasoning and coding has led to growing interest in methods for merging these expert LLMs into a unified Mixture-of-Experts (MoE) model, with the goal of enhancing performance in each domain while retaining effectiveness on general tasks.
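The abstract above describes merging domain experts into a Mixture-of-Experts model only at a high level; as a rough, hypothetical illustration (not the paper's actual merging recipe), the structural idea of routing tokens among expert feed-forward blocks can be sketched as follows:

```python
# Hypothetical sketch of combining expert MLP blocks into a single
# Mixture-of-Experts layer with a learned router. Only the generic MoE
# structure is shown; the paper's merging procedure is not reproduced here.
import torch
import torch.nn as nn

class MergedMoELayer(nn.Module):
    def __init__(self, expert_mlps, hidden_dim):
        super().__init__()
        self.experts = nn.ModuleList(expert_mlps)            # e.g., math and code experts
        self.router = nn.Linear(hidden_dim, len(expert_mlps))

    def forward(self, x):                                    # x: (batch, seq, hidden)
        gates = torch.softmax(self.router(x), dim=-1)        # per-token expert weights
        outputs = torch.stack([e(x) for e in self.experts], dim=-1)
        return (outputs * gates.unsqueeze(-2)).sum(dim=-1)   # gated mixture of experts
```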
no code implementations • 26 Dec 2024 • Yuxin You, Zhen Liu, Xiangchao Wen, Yongtao Zhang, Wei Ai
Future research should continue to explore how to efficiently fuse LLMs and GNNs to achieve more powerful graph learning and reasoning capabilities, providing new impetus for the development of graph mining techniques.
no code implementations • 16 Dec 2024 • Yiping Zhang, Yuntao Shou, Wei Ai, Tao Meng, Keqin Li
In addition, to further address the class imbalance problem, we design a dynamic group-aware margin strategy based on reinforcement learning to provide appropriate and unbiased margins for different groups.
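The dynamic group-aware margin idea can be illustrated with a minimal sketch: keep one margin per group, penalize the ground-truth logit by that margin, and adjust the margins according to how well each group is currently recognized. The reward-driven update below stands in for the paper's reinforcement-learning policy and is purely an assumption, as is treating groups as classes:

```python
# Hypothetical sketch of a dynamic group-aware margin loss for class imbalance.
import torch
import torch.nn.functional as F

class DynamicGroupMargin:
    def __init__(self, num_groups, init_margin=0.1, lr=0.05):
        self.margins = torch.full((num_groups,), init_margin)  # one margin per group
        self.lr = lr

    def loss(self, logits, labels):
        # Subtract the group's margin from the ground-truth logit, forcing the
        # model to separate that group from the others by a larger gap.
        adjusted = logits.clone()
        adjusted[torch.arange(len(labels)), labels] -= self.margins[labels]
        return F.cross_entropy(adjusted, labels)

    def update(self, logits, labels):
        # Simple reward-driven update (stand-in for the RL policy): enlarge the
        # margin of groups with low recall, shrink it for groups doing well.
        preds = logits.argmax(dim=-1)
        for c in labels.unique():
            mask = labels == c
            recall = (preds[mask] == c).float().mean()
            self.margins[c] = (self.margins[c] + self.lr * (0.5 - recall)).clamp(0.0, 1.0)
```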
no code implementations • 16 Dec 2024 • Tao Meng, Wei Ai, Jianbin Li, Ze Wang, Yuntao Shou, Keqin Li
In particular, we introduce the concept of an event skeleton for core representation semantics and simplify the typically complex data augmentation techniques found in existing graph contrastive learning to boost algorithmic efficiency.
no code implementations • 4 Dec 2024 • Yuntao Shou, Tao Meng, Wei Ai, Keqin Li
Multimodal emotion recognition in conversation (MERC) refers to identifying and classifying human emotional states by combining data from multiple different modalities (e.g., audio, images, text, and video).
Emotion Recognition in Conversation
Multimodal Emotion Recognition
1 code implementation • 29 Nov 2024 • Fangze Fu, Wei Ai, Fan Yang, Yuntao Shou, Tao Meng, Keqin Li
To address these issues, we propose a Spectral Domain Reconstruction Graph Neural Network (SDR-GNN) for incomplete multimodal learning in conversational emotion recognition.
1 code implementation • 25 Nov 2024 • Wei Ai, Jianbin Li, Ze Wang, Yingying Wei, Tao Meng, Yuntao Shou, Keqin Li
To address these limitations, we propose ConNHS, a novel contrastive multi-graph learning method with neighbor hierarchical sifting for semi-supervised text classification.
no code implementations • 28 Oct 2024 • Wei Ai, Yinghui Gao, Jianbin Li, JiaYi Du, Tao Meng, Yuntao Shou, Keqin Li
Entity alignment is crucial for merging knowledge across knowledge graphs, as it matches entities with identical semantics.
no code implementations • 18 Oct 2024 • Wei Ai, Wen Deng, Hongyi Chen, JiaYi Du, Tao Meng, Yuntao Shou
Multi-modal entity alignment (MMEA) is essential for enhancing knowledge graphs and improving information retrieval and question-answering systems.
no code implementations • 18 Oct 2024 • Wei Ai, Jianbin Li, Ze Wang, JiaYi Du, Tao Meng, Yuntao Shou, Keqin Li
Additionally, we propose a self-correction mechanism to mitigate the loss of true negative samples caused by clustering inconsistency.
1 code implementation • 23 Aug 2024 • Xiaoyu Liu, Jiaxin Yuan, YuHang Zhou, Jingling Li, Furong Huang, Wei Ai
The essence of sequential recommender systems (RecSys) lies in understanding how users make decisions.
no code implementations • 23 Jul 2024 • Tao Meng, FuChen Zhang, Yuntao Shou, HongEn Shao, Wei Ai, Keqin Li
Unlike traditional unimodal emotion recognition, MERC can fuse complementary semantic information between multiple modalities (e.g., text, audio, and vision) to improve emotion recognition.
no code implementations • 23 Jul 2024 • Yiping Zhang, Yuntao Shou, Tao Meng, Wei Ai, Keqin Li
In the feature extraction stage, we introduce a graph structure to construct face images as input and then design a Multi-view Mask Contrastive Learning (MMCL) mechanism to learn complex structural and semantic information about face images.
no code implementations • 27 Jun 2024 • Yuntao Shou, Wei Ai, JiaYi Du, Tao Meng, Haiyan Liu, Nan Yin
Existing methods focus on using graph neural networks (GNNs) to model conversational relationships and capture contextual latent semantic relationships.
Ranked #1 on Emotion Recognition in Conversation on MELD
1 code implementation • 19 Jun 2024 • YuHang Zhou, Jing Zhu, Paiheng Xu, Xiaoyu Liu, Xiyao Wang, Danai Koutra, Wei Ai, Furong Huang
Large language models (LLMs) have significantly advanced various natural language processing tasks, but deploying them remains computationally expensive.
no code implementations • 8 Jun 2024 • YuHang Zhou, Wei Ai
The first signal is the student's self-consistency (the agreement among the student's multiple outputs), which serves as a proxy for the student's confidence.
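A minimal sketch of this self-consistency signal, assuming a sampling-based `generate` function for the student model (a placeholder name, not an API from the paper), could look like:

```python
# Sample the student several times and measure how often its answers agree;
# the agreement rate of the majority answer acts as a confidence proxy.
from collections import Counter

def self_consistency(generate, prompt, num_samples=8):
    answers = [generate(prompt) for _ in range(num_samples)]
    most_common, count = Counter(answers).most_common(1)[0]
    confidence = count / num_samples
    return most_common, confidence
```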
no code implementations • 27 Apr 2024 • Tao Meng, FuChen Zhang, Yuntao Shou, Wei Ai, Nan Yin, Keqin Li
Since consistency and complementarity information correspond to low-frequency and high-frequency information, respectively, this paper revisits the problem of multimodal emotion recognition in conversation from the perspective of the graph spectrum.
Ranked #2 on Emotion Recognition in Conversation on IEMOCAP
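As a generic illustration of the low-/high-frequency view mentioned in the abstract above (not the paper's exact model), node features can be projected onto the smooth and oscillatory ends of the normalized graph Laplacian spectrum:

```python
# Split node features into low- and high-frequency components using the
# eigenvectors of the normalized graph Laplacian. Low frequencies capture
# smooth, "consistent" signals; high frequencies capture "complementary" ones.
import numpy as np

def split_frequencies(adj, features, k):
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    laplacian = np.eye(len(adj)) - d_inv_sqrt @ adj @ d_inv_sqrt
    eigvals, eigvecs = np.linalg.eigh(laplacian)   # eigenvalues sorted ascending
    low, high = eigvecs[:, :k], eigvecs[:, -k:]    # smooth vs. oscillatory basis
    low_freq = low @ (low.T @ features)            # consistency component
    high_freq = high @ (high.T @ features)         # complementarity component
    return low_freq, high_freq
```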
no code implementations • 24 Apr 2024 • Teng Ye, Jingnan Zheng, Junhui Jin, Jingyi Qiu, Wei Ai, Qiaozhu Mei
While small businesses are increasingly turning to online crowdfunding platforms for essential funding, over 40% of these campaigns may fail to raise any money, especially those from low socio-economic areas.
no code implementations • 3 Apr 2024 • Paiheng Xu, Jing Liu, Nathan Jones, Julie Cohen, Wei Ai
Assessing instruction quality is a fundamental component of any improvement efforts in the education system.
no code implementations • 14 Mar 2024 • Xiaoyu Liu, Paiheng Xu, Junda Wu, Jiaxin Yuan, Yifan Yang, YuHang Zhou, Fuxiao Liu, Tianrui Guan, Haoliang Wang, Tong Yu, Julian McAuley, Wei Ai, Furong Huang
Causal inference has shown potential in enhancing the predictive accuracy, fairness, robustness, and explainability of Natural Language Processing (NLP) models by capturing causal relationships among variables.
no code implementations • 22 Feb 2024 • YuHang Zhou, Xuan Lu, Wei Ai
In the rapidly evolving landscape of social media, the introduction of new emojis in Unicode release versions presents a structured opportunity to explore digital language evolution.
no code implementations • 22 Jan 2024 • YuHang Zhou, Paiheng Xu, Xiyao Wang, Xuan Lu, Ge Gao, Wei Ai
Our objective is to validate the hypothesis that ChatGPT can serve as a viable alternative to human annotators in emoji research and that its ability to explain emoji meanings can enhance clarity and transparency in online communications.
no code implementations • 19 Jan 2024 • Wei Ai, CanHao Xie, Tao Meng, Yinghao Wu, Keqin Li
Community search is a derivative of community detection that enables online and personalized discovery of communities and has found extensive applications in massive real-world networks.
no code implementations • 19 Jan 2024 • JiaYi Du, Yinghao Wu, Wei Ai, Tao Meng, CanHao Xie, Keqin Li
Community Search (CS) aims to identify densely interconnected subgraphs corresponding to query vertices within a graph.
no code implementations • 3 Jan 2024 • Wei Ai, FuChen Zhang, Tao Meng, Yuntao Shou, HongEn Shao, Keqin Li
To address the above issues, we propose a two-stage emotion recognition model based on graph contrastive learning (TS-GCL).
no code implementations • 28 Dec 2023 • Yuntao Shou, Tao Meng, Wei Ai, Nan Yin, Keqin Li
However, existing feature fusion methods usually map the features of different modalities into the same feature space for information fusion, which cannot eliminate the heterogeneity between modalities.
no code implementations • 17 Dec 2023 • Wei Ai, Yuntao Shou, Tao Meng, Nan Yin, Keqin Li
Specifically, we construct a weighted multi-relationship graph to simultaneously capture the dependencies between speakers and event relations in a dialogue.
no code implementations • 11 Dec 2023 • Tao Meng, Yuntao Shou, Wei Ai, Nan Yin, Keqin Li
The main task of Multimodal Emotion Recognition in Conversations (MERC) is to identify the emotions expressed across modalities, e.g., text, audio, image, and video, which is a significant development direction for realizing machine intelligence.
no code implementations • 10 Dec 2023 • Yuntao Shou, Tao Meng, Wei Ai, Nan Yin, Keqin Li
Unlike traditional single-utterance multimodal emotion recognition or single-modal conversational emotion recognition, MCER is a more challenging problem that must handle more complex emotional interaction relationships.
no code implementations • 5 Dec 2023 • Yuntao Shou, Wei Ai, Tao Meng, Nan Yin
Furthermore, this paper innovatively introduces information bottleneck theory into graph contrastive learning to maximize task-related information while minimizing task-independent redundant information.
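A hedged sketch of how an information-bottleneck term might be combined with graph contrastive learning is shown below: an InfoNCE term preserves view-shared (task-related) information, while a KL penalty toward a standard normal prior compresses the rest. The Gaussian encoder and the weighting are assumptions, not the paper's exact formulation:

```python
# Illustrative IB-style contrastive objective over two graph views.
import torch
import torch.nn.functional as F

def ib_contrastive_loss(mu1, logvar1, z1, z2, beta=0.1, tau=0.5):
    # z1 is assumed to be sampled from the Gaussian encoder (mu1, logvar1);
    # z1 and z2 are L2-normalized node embeddings from the two views.
    sim = z1 @ z2.t() / tau
    targets = torch.arange(len(z1))
    contrastive = F.cross_entropy(sim, targets)      # keeps task-related information
    kl = -0.5 * torch.mean(1 + logvar1 - mu1.pow(2) - logvar1.exp())  # compression term
    return contrastive + beta * kl
```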
no code implementations • 4 Dec 2023 • Yuntao Shou, Wei Ai, Tao Meng, Nan Yin, Keqin Li
However, existing CLIP-based age estimation methods require high memory usage (quadratic complexity) when globally modeling images, and lack an error feedback mechanism to prompt the model about the quality of age prediction results.
1 code implementation • 15 Nov 2023 • YuHang Zhou, Paiheng Xu, Xiaoyu Liu, Bang An, Wei Ai, Furong Huang
We find that LMs, when encountering spurious correlations between a concept and a label in training or prompts, resort to shortcuts for predictions.
no code implementations • 12 Sep 2023 • Ahmed Adel Attia, Jing Liu, Wei Ai, Dorottya Demszky, Carol Espy-Wilson
Recent advancements in Automatic Speech Recognition (ASR) systems, exemplified by Whisper, have demonstrated the potential of these systems to approach human-level performance given sufficient data.
Automatic Speech Recognition (ASR)
no code implementations • 30 Aug 2023 • YuHang Zhou, Xuan Lu, Ge Gao, Qiaozhu Mei, Wei Ai
In this paper, we study how emoji usage influences developer participation and issue resolution in virtual workspaces.
no code implementations • 1 Jun 2023 • Jing Zhu, YuHang Zhou, Vassilis N. Ioannidis, Shengyi Qian, Wei Ai, Xiang Song, Danai Koutra
While Graph Neural Networks (GNNs) are remarkably successful in a variety of high-impact applications, we demonstrate that, in link prediction, the common practices of including the edges being predicted in the graph at training and/or test have an outsized impact on the performance of low-degree nodes.
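One framework-agnostic way to avoid the practice described above is to drop the edges being scored from the message-passing graph before running the GNN; the sketch below is illustrative only, and the `gnn` call is a placeholder rather than an API from the paper:

```python
# Remove the edges that are being predicted from the message-passing graph so
# the model cannot "peek" at the supervision edges (treated as undirected here).
def remove_target_edges(edge_list, target_edges):
    targets = {tuple(sorted(e)) for e in target_edges}
    return [e for e in edge_list if tuple(sorted(e)) not in targets]

# message_passing_edges = remove_target_edges(all_edges, edges_to_score)
# node_embeddings = gnn(node_features, message_passing_edges)  # placeholder GNN
```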
no code implementations • 25 May 2023 • Paiheng Xu, YuHang Zhou, Bang An, Wei Ai, Furong Huang
Given the growing concerns about fairness in machine learning and the impressive performance of Graph Neural Networks (GNNs) on graph data learning, algorithmic fairness in GNNs has attracted significant attention.
no code implementations • 29 Jan 2023 • Xuan Lu, Wei Ai, Yixin Wang, Qiaozhu Mei
While many organizations have shifted to working remotely during the COVID-19 pandemic, how the remote workforce and remote teams are influenced by and would respond to this and future shocks remains largely unknown.
no code implementations • 10 Feb 2021 • Xuan Lu, Wei Ai, Zhenpeng Chen, Yanbin Cao, Qiaozhu Mei
This paper studies how emojis, as non-verbal cues in online communications, can be used for such purposes and how the emotional signals in emoji usage can be used to predict future behavior of workers.
no code implementations • 7 Aug 2020 • Teng Ye, Wei Ai, Lingyu Zhang, Ning Luo, Lulu Zhang, Jieping Ye, Qiaozhu Mei
Through interpreting the best-performing models, we discover many novel and actionable insights regarding how to optimize the design and the execution of team competitions on ride-sharing platforms.