no code implementations • 14 Mar 2024 • Xiaoyu Liu, Paiheng Xu, Junda Wu, Jiaxin Yuan, Yifan Yang, YuHang Zhou, Fuxiao Liu, Tianrui Guan, Haoliang Wang, Tong Yu, Julian McAuley, Wei Ai, Furong Huang
Causal inference has shown potential in enhancing the predictive accuracy, fairness, robustness, and explainability of Natural Language Processing (NLP) models by capturing causal relationships among variables.
no code implementations • 22 Feb 2024 • YuHang Zhou, Xuan Lu, Wei Ai
In the rapidly evolving landscape of social media, the introduction of new emojis in Unicode release versions presents a structured opportunity to explore digital language evolution.
no code implementations • 20 Feb 2024 • YuHang Zhou, Yuchen Ni, Xiang Liu, Jian Zhang, Sen Liu, Guangnan Ye, Hongfeng Chai
Large Language Models (LLMs) are progressively being adopted in financial analysis to harness their extensive knowledge base for interpreting complex market data and trends.
no code implementations • 1 Feb 2024 • Xiaowei Fu, YuHang Zhou, Lina Ma, Lei Zhang
Based on this finding, a Pixel Surgery and Semantic Regeneration (PSSR) model following the targeted therapy mechanism is developed, which has three merits: 1) To remove the salient attack, a score-based Pixel Surgery module is proposed, which retains the trivial attack as a kind of invariance information.
no code implementations • 22 Jan 2024 • YuHang Zhou, Paiheng Xu, Xiyao Wang, Xuan Lu, Ge Gao, Wei Ai
Our objective is to validate the hypothesis that ChatGPT can serve as a viable alternative to human annotators in emoji research and that its ability to explain emoji meanings can enhance clarity and transparency in online communications.
1 code implementation • 19 Jan 2024 • Xiyao Wang, YuHang Zhou, Xiaoyu Liu, Hongjin Lu, Yuancheng Xu, Feihong He, Jaehong Yoon, Taixi Lu, Gedas Bertasius, Mohit Bansal, Huaxiu Yao, Furong Huang
However, current MLLM benchmarks are predominantly designed to evaluate reasoning based on static information about a single image, and the ability of modern MLLMs to extrapolate from image sequences, which is essential for understanding our ever-changing world, has been less investigated.
no code implementations • 15 Nov 2023 • YuHang Zhou, Paiheng Xu, Xiaoyu Liu, Bang An, Wei Ai, Furong Huang
We find that LMs, when encountering spurious correlations between a concept and a label in training or prompts, resort to shortcuts for predictions.
1 code implementation • 3 Nov 2023 • YuHang Zhou, He Yu, Siyu Tian, Dan Chen, Liuzhi Zhou, Xinlin Yu, Chuanjun Ji, Sen Liu, Guangnan Ye, Hongfeng Chai
While current NL2SQL tasks constructed using Foundation Models have achieved commendable results, their direct application to Natural Language to Graph Query Language (NL2GQL) tasks poses challenges due to the significant differences between GQL and SQL expressions, as well as the numerous types of GQL.
no code implementations • 30 Aug 2023 • YuHang Zhou, Xuan Lu, Ge Gao, Qiaozhu Mei, Wei Ai
In this paper, we study how emoji usage influences developer participation and issue resolution in virtual workspaces.
1 code implementation • 3 Aug 2023 • YuHang Zhou, Jiangchao Yao, Feng Hong, Ya Zhang, Yanfeng Wang
By dynamically manipulating the gradient during training based on these factors, BDR can effectively alleviate knowledge destruction and improve knowledge reconstruction.
no code implementations • 1 Jun 2023 • Jing Zhu, YuHang Zhou, Vassilis N. Ioannidis, Shengyi Qian, Wei Ai, Xiang Song, Danai Koutra
While Graph Neural Networks (GNNs) are remarkably successful in a variety of high-impact applications, we demonstrate that, in link prediction, the common practices of including the edges being predicted in the graph at training and/or test have outsized impact on the performance of low-degree nodes.
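The practice the paper critiques can be illustrated with a minimal sketch (a toy numpy example, not the paper's implementation): when the edge being scored is left inside the message-passing graph, the model's embeddings are computed from the very answer it is asked to predict, and removing that edge changes the score — most drastically for low-degree nodes, which may lose their only neighbor.

```python
import numpy as np

def mean_aggregate(A, X):
    """One round of mean-neighbor aggregation (a minimal stand-in for a GNN layer)."""
    deg = A.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0  # avoid division by zero for isolated nodes
    return A @ X / deg

def score_edge(A, X, u, v, exclude_target=True):
    """Score candidate edge (u, v) as the dot product of aggregated embeddings.

    With exclude_target=True the edge being predicted is removed from the
    message-passing graph, so the model cannot 'see' the answer it must predict.
    """
    A = A.copy()
    if exclude_target:
        A[u, v] = A[v, u] = 0.0
    H = mean_aggregate(A, X)
    return float(H[u] @ H[v])

# Toy path graph: 4 nodes, edges (0-1), (1-2), (2-3); random node features.
rng = np.random.default_rng(0)
A = np.zeros((4, 4))
for u, v in [(0, 1), (1, 2), (2, 3)]:
    A[u, v] = A[v, u] = 1.0
X = rng.normal(size=(4, 8))

leaky = score_edge(A, X, 1, 2, exclude_target=False)  # target edge left in the graph
clean = score_edge(A, X, 1, 2, exclude_target=True)   # target edge held out
print(leaky, clean)
```

The two scores differ because the held-out edge no longer contributes to either endpoint's embedding; for a degree-1 node, excluding its single edge leaves it with no neighbors at all, which is why the effect concentrates on low-degree nodes.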
no code implementations • 25 May 2023 • Paiheng Xu, YuHang Zhou, Bang An, Wei Ai, Furong Huang
Given the growing concerns about fairness in machine learning and the impressive performance of Graph Neural Networks (GNNs) on graph data learning, algorithmic fairness in GNNs has attracted significant attention.
no code implementations • 18 Feb 2023 • YuHang Zhou, Suraj Maharjan, Beiye Liu
In this paper, we propose two methods to automatically design multiple prompts and integrate automatic verbalizer in SSL settings without sacrificing performance.
1 code implementation • 28 Dec 2022 • Zi'an Xu, Yin Dai, Fayu Liu, Weibing Chen, Yue Liu, Lifu Shi, Sheng Liu, YuHang Zhou
The development of deep learning models in medical image analysis is largely limited by the lack of large, well-annotated datasets.
1 code implementation • 16 Feb 2022 • Oana Ignat, Santiago Castro, YuHang Zhou, Jiajun Bao, Dandan Shan, Rada Mihalcea
We consider the task of temporal human action localization in lifestyle vlogs.
no code implementations • CVPR 2022 • Yuchen Li, Zixuan Li, Siyu Teng, Yu Zhang, YuHang Zhou, Yuchang Zhu, Dongpu Cao, Bin Tian, Yunfeng Ai, Zhe XuanYuan, Long Chen
The main contributions of the AutoMine dataset are as follows: 1. The first autonomous driving dataset for perception and localization in mine scenarios.
no code implementations • 30 Dec 2021 • Chenlin Shen, Guangda Huzhang, YuHang Zhou, Chen Liang, Qing Da
Our algorithm directly optimizes the linear program in the primal space, and its solution can be realized by a simple stochastic strategy that fulfills the optimized objective and the constraints in expectation.
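The "stochastic strategy" step can be sketched as follows (a minimal numpy illustration under assumed notation, not the paper's system): given a fractional primal LP solution x*, each component is realized as an independent Bernoulli(x*[i]) decision, so the objective and constraints hold in expectation.

```python
import numpy as np

# Hypothetical fractional solution of a primal LP: x_star[i] is the optimal
# probability of taking action i, subject to constraints that must hold in
# expectation (e.g. an expected-budget constraint sum(x) <= B).
x_star = np.array([0.2, 0.7, 0.5, 0.9])

def stochastic_strategy(x, rng):
    """Realize a fractional LP solution as random 0/1 decisions.

    Each decision is Bernoulli(x[i]); since E[decision_i] = x[i], any linear
    objective or constraint satisfied by x is satisfied in expectation.
    """
    return (rng.random(len(x)) < x).astype(float)

rng = np.random.default_rng(42)
samples = np.array([stochastic_strategy(x_star, rng) for _ in range(20000)])
print(samples.mean(axis=0))  # empirical decision rates, close to x_star
```

Averaged over many independent draws, the realized 0/1 decisions match the fractional optimum, which is exactly the sense in which the constraints are met "in expectation."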
no code implementations • 5 Aug 2021 • Shixiang Feng, YuHang Zhou, Xiaoman Zhang, Ya Zhang, Yanfeng Wang
A novel Multi-teacher Single-student Knowledge Distillation (MS-KD) framework is proposed, where the teacher models are pre-trained single-organ segmentation networks, and the student model is a multi-organ segmentation network.
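The multi-teacher-to-single-student setup can be sketched in a few lines (a hedged numpy toy, with a hypothetical per-organ binary cross-entropy distillation loss; the paper's actual loss and architectures may differ): each single-organ teacher produces a foreground probability for its organ, which supervises the corresponding class of the multi-organ student's softmax output.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def ms_kd_loss(student_logits, teacher_logits_per_organ):
    """Distill K single-organ (binary) teachers into one multi-organ student.

    student_logits: (N, K+1) scores over [background, organ_1, ..., organ_K].
    teacher_logits_per_organ: list of K arrays of shape (N,), each the
    foreground logit of one pretrained single-organ teacher.
    Hypothetical loss: per organ, cross-entropy between the teacher's soft
    foreground probability and the student's probability for that organ class.
    """
    p_student = softmax(student_logits)              # (N, K+1)
    loss = 0.0
    for k, t_logit in enumerate(teacher_logits_per_organ):
        p_t = sigmoid(t_logit)                       # teacher's soft target
        p_s = np.clip(p_student[:, k + 1], 1e-7, 1 - 1e-7)
        loss += -np.mean(p_t * np.log(p_s) + (1 - p_t) * np.log(1 - p_s))
    return loss / len(teacher_logits_per_organ)

rng = np.random.default_rng(0)
N, K = 6, 3                                          # 6 voxels, 3 organs
student = rng.normal(size=(N, K + 1))
teachers = [rng.normal(size=N) for _ in range(K)]
print(ms_kd_loss(student, teachers))
```

The key point of the framework survives even in this toy form: no teacher ever sees more than one organ's labels, yet their combined soft targets supervise every foreground class of the single student.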
no code implementations • 4 Aug 2021 • Liyuan Zhang, YuHang Zhou, Lei Zhang
State-of-the-art deep neural networks (DNNs) have been shown to achieve excellent performance on unsupervised domain adaptation (UDA).
no code implementations • 9 Mar 2021 • YuHang Zhou, Xiaoman Zhang, Shixiang Feng, Ya Zhang, Yanfeng Wang
Specifically, given a pretrained $K$ organ segmentation model and a new single-organ dataset, we train a unified $K+1$ organ segmentation model without accessing any data belonging to the previous training stages.
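The model-growing step of this setup can be sketched as follows (a numpy toy under assumed shapes, not the paper's method): the pretrained head's rows for the background and the $K$ known organs are copied unchanged, and a single freshly initialized row is appended for the new organ, so no old data is needed to build the extended model.

```python
import numpy as np

def extend_segmentation_head(W_old, b_old, rng):
    """Grow a (K+1)-class output layer (background + K organs) by one organ.

    W_old: (K+1, D) classifier weights; b_old: (K+1,) biases of the pretrained
    head. Old rows are copied unchanged, so logits for the K known organs are
    preserved; only the new organ's row starts from scratch. This is just the
    growing step: the full method must still counter forgetting during training
    (e.g. via distillation from the old model), which is omitted here.
    """
    _, D = W_old.shape
    W_new = np.vstack([W_old, 0.01 * rng.normal(size=(1, D))])
    b_new = np.append(b_old, 0.0)
    return W_new, b_new

rng = np.random.default_rng(0)
W, b = rng.normal(size=(4, 16)), np.zeros(4)   # background + 3 organs, D = 16
W2, b2 = extend_segmentation_head(W, b, rng)
print(W2.shape, b2.shape)                       # grown to 5 output classes
```

Because the copied rows are untouched, the extended model initially reproduces the old model's scores for the previously learned organs; subsequent training on the new single-organ dataset only has to fit the appended class.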
no code implementations • 21 Oct 2020 • Yifan Hu, YuHang Zhou, Jun Xiao, Chao Wu
Federated learning (FL) is a rapidly growing field, and many centralized and decentralized FL frameworks have been proposed.
no code implementations • 13 Oct 2020 • Xiaoman Zhang, Shixiang Feng, YuHang Zhou, Ya Zhang, Yanfeng Wang
We demonstrate the effectiveness of our methods on two downstream tasks: i) Brain tumor segmentation, ii) Pancreas tumor segmentation.