1 code implementation • Findings (EMNLP) 2021 • Zhenwen Liang, Xiangliang Zhang
Many existing works have demonstrated that language is a helpful guide for image understanding by neural networks.
1 code implementation • LREC 2022 • Reem Alghamdi, Zhenwen Liang, Xiangliang Zhang
In addition, a transfer learning model is built to let the high-resource Chinese MWP solver promote the performance of the low-resource Arabic MWP solver.
no code implementations • NAACL (GeBNLP) 2022 • Xiuying Chen, Mingzhe Li, Rui Yan, Xin Gao, Xiangliang Zhang
Word embeddings learned from massive text collections have demonstrated significant levels of discriminative bias. However, debiasing in Chinese, one of the most widely spoken languages, has been less explored. Meanwhile, existing literature relies on manually created supplementary data, which is time- and energy-consuming. In this work, we propose the first Chinese Gender-neutral word Embedding model (CGE) based on Word2vec, which learns gender-neutral word embeddings without any labeled data. Concretely, CGE utilizes and emphasizes the rich feminine and masculine information contained in radicals, i.e., a kind of component in Chinese characters, during the training procedure, which consequently alleviates discriminative gender bias. Experimental results on public benchmark datasets show that our unsupervised method outperforms state-of-the-art supervised debiased word embedding models without sacrificing the functionality of the embedding model.
no code implementations • 19 Feb 2025 • Yicheng Lang, Kehan Guo, Yue Huang, Yujun Zhou, Haomin Zhuang, Tianyu Yang, Yao Su, Xiangliang Zhang
Due to the widespread use of LLMs and the rising critical ethical and safety concerns, LLM unlearning methods have been developed to remove harmful knowledge and undesirable capabilities.
1 code implementation • 3 Feb 2025 • Dawei Li, Renliang Sun, Yue Huang, Ming Zhong, Bohan Jiang, Jiawei Han, Xiangliang Zhang, Wei Wang, Huan Liu
All of these findings imply that preference leakage is a widespread and challenging problem in the area of LLM-as-a-judge.
no code implementations • 22 Dec 2024 • Lang Gao, Xiangliang Zhang, Preslav Nakov, Xiuying Chen
In particular, we introduce the "safety boundary", and we find that jailbreaks shift harmful activations outside that safety boundary, where LLMs are less sensitive to harmful information.
no code implementations • 9 Dec 2024 • Lincan Li, Jiaqi Li, Catherine Chen, Fred Gui, Hongjia Yang, Chenxiao Yu, Zhengguang Wang, Jianing Cai, Junlong Aaron Zhou, Bolin Shen, Alex Qian, Weixin Chen, Zhongkai Xue, Lichao Sun, Lifang He, Hanjie Chen, Kaize Ding, Zijian Du, Fangzhou Mu, Jiaxin Pei, Jieyu Zhao, Swabha Swayamdipta, Willie Neiswanger, Hua Wei, Xiyang Hu, Shixiang Zhu, Tianlong Chen, Yingzhou Lu, Yang Shi, Lianhui Qin, Tianfan Fu, Zhengzhong Tu, Yuzhe Yang, Jaemin Yoo, Jiaheng Zhang, Ryan Rossi, Liang Zhan, Liang Zhao, Emilio Ferrara, Yan Liu, Furong Huang, Xiangliang Zhang, Lawrence Rothenberg, Shuiwang Ji, Philip S. Yu, Yue Zhao, Yushun Dong
In recent years, large language models (LLMs) have been widely adopted in political science tasks such as election prediction, sentiment analysis, policy impact assessment, and misinformation detection.
no code implementations • 27 Nov 2024 • Haomin Zhuang, Yihua Zhang, Kehan Guo, Jinghan Jia, Gaowen Liu, Sijia Liu, Xiangliang Zhang
As MoE LLMs are celebrated for their exceptional performance and highly efficient inference processes, we ask: How can unlearning be performed effectively and efficiently on MoE LLMs?
no code implementations • 7 Nov 2024 • Tianyu Yang, Yiyang Nan, Lisen Dai, Zhenwen Liang, Yapeng Tian, Xiangliang Zhang
Audio-Visual Question Answering (AVQA) is a challenging task that involves answering questions based on both auditory and visual information in videos.
no code implementations • 30 Oct 2024 • Tianyu Yang, Lisen Dai, Zheyuan Liu, Xiangqi Wang, Meng Jiang, Yapeng Tian, Xiangliang Zhang
Machine unlearning (MU) has gained significant attention as a means to remove specific data from trained models without requiring a full retraining process.
no code implementations • 30 Oct 2024 • Yue Huang, Zhengqing Yuan, Yujun Zhou, Kehan Guo, Xiangqi Wang, Haomin Zhuang, Weixiang Sun, Lichao Sun, Jindong Wang, Yanfang Ye, Xiangliang Zhang
To address this, we introduce TrustSim, an evaluation dataset covering 10 CSS-related topics, to systematically investigate the reliability of the LLM simulation.
1 code implementation • 28 Oct 2024 • Han Bao, Yue Huang, Yanbo Wang, Jiayi Ye, Xiangqi Wang, Xiuying Chen, Yue Zhao, Tianyi Zhou, Mohamed Elhoseiny, Xiangliang Zhang
Large Vision-Language Models (LVLMs) have become essential for advancing the integration of visual and linguistic information.
no code implementations • 25 Oct 2024 • Taicheng Guo, Chaochun Liu, Hai Wang, Varun Mannam, Fang Wang, Xin Chen, Xiangliang Zhang, Chandan K. Reddy
Our key insight is that the paths in a KG can capture complex relationships between users and items, eliciting the underlying reasons for user preferences and enriching user profiles.
no code implementations • 18 Oct 2024 • Yujun Zhou, Jingdong Yang, Kehan Guo, Pin-Yu Chen, Tian Gao, Werner Geyer, Nuno Moniz, Nitesh V Chawla, Xiangliang Zhang
With the increasing reliance on large language models (LLMs) for guidance in various fields, including laboratory settings, there is a growing concern about their reliability in critical safety-related decision-making.
no code implementations • 5 Oct 2024 • Zhenwen Liang, Ye Liu, Tong Niu, Xiangliang Zhang, Yingbo Zhou, Semih Yavuz
Moreover, to leverage the unique strengths of different reasoning strategies, we propose a novel collaborative method integrating Chain-of-Thought (CoT) and Program-of-Thought (PoT) solutions for verification.
no code implementations • 3 Oct 2024 • Jiayi Ye, Yanbo Wang, Yue Huang, Dongping Chen, Qihui Zhang, Nuno Moniz, Tian Gao, Werner Geyer, Chao Huang, Pin-Yu Chen, Nitesh V Chawla, Xiangliang Zhang
LLM-as-a-Judge has been widely utilized as an evaluation method in various benchmarks and served as supervised rewards in model training.
no code implementations • 24 Jul 2024 • Xiuying Chen, Tairan Wang, Taicheng Guo, Kehan Guo, Juexiao Zhou, Haoyang Li, Mingchen Zhuge, Jürgen Schmidhuber, Xin Gao, Xiangliang Zhang
We hope our benchmark and model can facilitate and promote more research on chemical QA.
no code implementations • 16 Jul 2024 • Xiaochuan Gou, Ziyue Li, Tian Lan, Junpeng Lin, Zhishuai Li, Bingyu Zhao, Chen Zhang, Di Wang, Xiangliang Zhang
Our data can revolutionize traditional traffic-related tasks toward higher interpretability and practicality: instead of traditional prediction or classification tasks, we conduct (1) post-incident traffic forecasting to quantify the impact of different incidents on traffic indexes; (2) incident classification using traffic indexes to determine incident types for precautionary measures; (3) global causal analysis among the traffic indexes, meta-attributes, and incidents to give high-level guidance on the interrelations of various factors; and (4) local causal analysis within road nodes to examine how different incidents affect the relations among road segments.
no code implementations • 3 Jul 2024 • Qiang Yang, Xiuying Chen, Changsheng Ma, Carlos M. Duarte, Xiangliang Zhang
The automatic classification of animal sounds presents an enduring challenge in bioacoustics, owing to the diverse statistical properties of sound signals, variations in recording equipment, and prevalent low Signal-to-Noise Ratio (SNR) conditions.
1 code implementation • 27 Jun 2024 • Siyuan Wu, Yue Huang, Chujie Gao, Dongping Chen, Qihui Zhang, Yao Wan, Tianyi Zhou, Xiangliang Zhang, Jianfeng Gao, Chaowei Xiao, Lichao Sun
Large Language Models (LLMs) such as GPT-4 and Llama3 have significantly impacted various fields by enabling high-quality synthetic data generation and reducing dependence on expensive human-generated datasets.
no code implementations • 25 Jun 2024 • Yuan Li, Yue Huang, Hongyi Wang, Xiangliang Zhang, James Zou, Lichao Sun
Inspired by psychometrics, this paper presents a framework for investigating psychology in LLMs, including psychological dimension identification, assessment dataset curation, and assessment with results validation.
no code implementations • 20 Jun 2024 • Yue Huang, Chenrui Fan, Yuan Li, Siyuan Wu, Tianyi Zhou, Xiangliang Zhang, Lichao Sun
This paper introduces a method to enhance the multilingual performance of LLMs by aggregating knowledge from diverse languages.
1 code implementation • 19 Jun 2024 • Yue Huang, Jingyu Tang, Dongping Chen, Bingda Tang, Yao Wan, Lichao Sun, Philip S. Yu, Xiangliang Zhang
Recently, Large Language Models (LLMs) have garnered significant attention for their exceptional natural language processing capabilities.
no code implementations • 10 Jun 2024 • Xiaoting Lyu, Yufei Han, Wei Wang, Jingkai Liu, Yongsheng Zhu, Guangquan Xu, Jiqiang Liu, Xiangliang Zhang
Furthermore, PFL systems can also deploy both server-end and client-end defense mechanisms to strengthen the barrier against backdoor attacks.
no code implementations • 10 Jun 2024 • Khiem Le, Zhichun Guo, Kaiwen Dong, Xiaobao Huang, Bozhao Nan, Roshni Iyer, Xiangliang Zhang, Olaf Wiest, Wei Wang, Nitesh V. Chawla
Large Language Models (LLMs) with their strong task-handling capabilities have shown remarkable advancements across a spectrum of fields, moving beyond natural language understanding.
1 code implementation • 8 Jun 2024 • Xiuying Chen, Mingzhe Li, Shen Gao, Xin Cheng, Qingqing Zhu, Rui Yan, Xin Gao, Xiangliang Zhang
Our model's distinct separation of general and domain-specific summarization abilities grants it notable flexibility and adaptability, all while maintaining parameter efficiency.
no code implementations • 8 Jun 2024 • Xiuying Chen, Shen Gao, Mingzhe Li, Qingqing Zhu, Xin Gao, Xiangliang Zhang
Hence, in this paper, we propose the task of Stepwise Summarization, which aims to generate a new appended summary each time a new document is proposed.
1 code implementation • 1 Jun 2024 • Chujie Gao, Siyuan Wu, Yue Huang, Dongping Chen, Qihui Zhang, Zhengyan Fu, Yao Wan, Lichao Sun, Xiangliang Zhang
Subsequently, we present two approaches to augmenting honesty and helpfulness in LLMs: a training-free enhancement and a fine-tuning-based improvement.
1 code implementation • 29 May 2024 • Zhenwen Liang, Dian Yu, Wenhao Yu, Wenlin Yao, Zhihan Zhang, Xiangliang Zhang, Dong Yu
We evaluate the performance of various SOTA LLMs on the MathChat benchmark, and we observe that while these models excel in single turn question answering, they significantly underperform in more complex scenarios that require sustained reasoning and dialogue understanding.
1 code implementation • 28 May 2024 • Xiaoting Lyu, Yufei Han, Wei Wang, Hangwei Qian, Ivor Tsang, Xiangliang Zhang
Graph Prompt Learning (GPL) bridges significant disparities between pretraining and downstream applications to alleviate the knowledge transfer bottleneck in real-world graph learning.
no code implementations • 9 Apr 2024 • Rui Cai, Shichao Pei, Xiangliang Zhang
Relational learning is an essential task in the domain of knowledge representation, particularly in knowledge graph completion (KGC).
no code implementations • 9 Apr 2024 • Yi Gui, Zhen Li, Yao Wan, Yemin Shi, Hongyu Zhang, Yi Su, Bohua Chen, Dongping Chen, Siyuan Wu, Xing Zhou, Wenbin Jiang, Hai Jin, Xiangliang Zhang
The benchmarking results demonstrate that our dataset significantly improves the ability of MLLMs to generate code from webpage designs, confirming its effectiveness and usability for future applications in front-end design tools.
no code implementations • 22 Feb 2024 • Xiuying Chen, Tairan Wang, Qingqing Zhu, Taicheng Guo, Shen Gao, Zhiyong Lu, Xin Gao, Xiangliang Zhang
Our findings confirm that FM offers a more logical approach to evaluating scientific summaries.
no code implementations • 20 Feb 2024 • Yujun Zhou, Yufei Han, Haomin Zhuang, Kehan Guo, Zhenwen Liang, Hongyan Bao, Xiangliang Zhang
Large Language Models (LLMs) demonstrate remarkable capabilities across diverse applications.
no code implementations • 12 Feb 2024 • Yijun Tian, Chuxu Zhang, Ziyi Kou, Zheyuan Liu, Xiangliang Zhang, Nitesh V. Chawla
In light of this, we propose UGMAE, a unified framework for graph masked autoencoders to address these issues from the perspectives of adaptivity, integrity, complementarity, and consistency.
no code implementations • 6 Feb 2024 • Zhenwen Liang, Kehan Guo, Gang Liu, Taicheng Guo, Yujun Zhou, Tianyu Yang, Jiajun Jiao, Renjie Pi, Jipeng Zhang, Xiangliang Zhang
The paper introduces SceMQA, a novel benchmark for scientific multimodal question answering at the college entrance level.
no code implementations • 6 Feb 2024 • Yihong Ma, Xiaobao Huang, Bozhao Nan, Nuno Moniz, Xiangliang Zhang, Olaf Wiest, Nitesh V. Chawla
The yield of a chemical reaction quantifies the percentage of the target product formed in relation to the reactants consumed during the chemical reaction.
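The yield definition above amounts to a simple ratio; a minimal sketch (the helper name, arguments, and stoichiometry handling are illustrative assumptions, not the paper's prediction model):

```python
def reaction_yield(moles_product: float, moles_limiting_reactant: float,
                   stoichiometric_ratio: float = 1.0) -> float:
    """Percent yield: actual product formed relative to the theoretical
    maximum determined by the limiting reactant (illustrative helper)."""
    theoretical_max = moles_limiting_reactant * stoichiometric_ratio
    return 100.0 * moles_product / theoretical_max

# 0.8 mol product obtained from 1.0 mol limiting reactant (1:1 stoichiometry)
print(reaction_yield(0.8, 1.0))  # 80.0
```

The cited work learns to predict this percentage from reaction representations rather than computing it from known quantities.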
no code implementations • 31 Jan 2024 • Xiaodong Wu, Yufei Han, Hayssam Dahrouj, Jianbing Ni, Zhenwen Liang, Xiangliang Zhang
Machine teaching often involves the creation of an optimal (typically minimal) dataset to help a model (referred to as the 'student') achieve specific goals given by a teacher.
1 code implementation • 21 Jan 2024 • Taicheng Guo, Xiuying Chen, Yaqi Wang, Ruidi Chang, Shichao Pei, Nitesh V. Chawla, Olaf Wiest, Xiangliang Zhang
To provide the community with an overview of this dynamic field, we present this survey to offer an in-depth discussion on the essential aspects of multi-agent systems based on LLMs, as well as the challenges.
1 code implementation • 10 Jan 2024 • Yue Huang, Lichao Sun, Haoran Wang, Siyuan Wu, Qihui Zhang, Yuan Li, Chujie Gao, Yixin Huang, Wenhan Lyu, Yixuan Zhang, Xiner Li, Zhengliang Liu, Yixin Liu, Yijue Wang, Zhikun Zhang, Bertie Vidgen, Bhavya Kailkhura, Caiming Xiong, Chaowei Xiao, Chunyuan Li, Eric Xing, Furong Huang, Hao liu, Heng Ji, Hongyi Wang, huan zhang, Huaxiu Yao, Manolis Kellis, Marinka Zitnik, Meng Jiang, Mohit Bansal, James Zou, Jian Pei, Jian Liu, Jianfeng Gao, Jiawei Han, Jieyu Zhao, Jiliang Tang, Jindong Wang, Joaquin Vanschoren, John Mitchell, Kai Shu, Kaidi Xu, Kai-Wei Chang, Lifang He, Lifu Huang, Michael Backes, Neil Zhenqiang Gong, Philip S. Yu, Pin-Yu Chen, Quanquan Gu, ran Xu, Rex Ying, Shuiwang Ji, Suman Jana, Tianlong Chen, Tianming Liu, Tianyi Zhou, William Wang, Xiang Li, Xiangliang Zhang, Xiao Wang, Xing Xie, Xun Chen, Xuyu Wang, Yan Liu, Yanfang Ye, Yinzhi Cao, Yong Chen, Yue Zhao
This paper introduces TrustLLM, a comprehensive study of trustworthiness in LLMs, including principles for different dimensions of trustworthiness, established benchmark, evaluation, and analysis of trustworthiness for mainstream LLMs, and discussion of open challenges and future directions.
no code implementations • 7 Oct 2023 • Taicheng Guo, Changsheng Ma, Xiuying Chen, Bozhao Nan, Kehan Guo, Shichao Pei, Nitesh V. Chawla, Olaf Wiest, Xiangliang Zhang
Reaction prediction, a critical task in synthetic chemistry, is to predict the outcome of a reaction based on given reactants.
no code implementations • 16 Jul 2023 • Zhenwen Liang, Dian Yu, Xiaoman Pan, Wenlin Yao, Qingkai Zeng, Xiangliang Zhang, Dong Yu
Our approach uniquely considers the various annotation formats as different "views" and leverages them in training the model.
1 code implementation • 1 Jun 2023 • Xiuying Chen, Guodong Long, Chongyang Tao, Mingzhe Li, Xin Gao, Chengqi Zhang, Xiangliang Zhang
The other factor is in the latent space, where the attacked inputs bring more variations to the hidden states.
1 code implementation • NeurIPS 2023 • Taicheng Guo, Kehan Guo, Bozhao Nan, Zhenwen Liang, Zhichun Guo, Nitesh V. Chawla, Olaf Wiest, Xiangliang Zhang
In this paper, rather than pursuing state-of-the-art performance, we aim to evaluate capabilities of LLMs in a wide range of tasks across the chemistry domain.
no code implementations • 22 May 2023 • Zhenwen Liang, Wenhao Yu, Tanmay Rajpurohit, Peter Clark, Xiangliang Zhang, Ashwin Kalyan
In this paper, we present a novel approach for distilling math word problem solving capabilities from large language models (LLMs) into smaller, more efficient student models.
no code implementations • 19 May 2023 • Xiuying Chen, Mingzhe Li, Shen Gao, Xin Cheng, Qiang Yang, Qishen Zhang, Xin Gao, Xiangliang Zhang
To address these two challenges, we first propose a unified topic encoder, which jointly discovers latent topics from the document and various kinds of side information.
1 code implementation • 23 Apr 2023 • Zhenwei Tang, Griffin Floto, Armin Toroghi, Shichao Pei, Xiangliang Zhang, Scott Sanner
In this work, we formulate the problem of recommendation with users' logical requirements (LogicRec) and construct benchmark datasets for LogicRec.
no code implementations • 17 Mar 2023 • Xiuying Chen, Mingzhe Li, Jiayi Zhang, Xiaoqiang Xia, Chen Wei, Jianwei Cui, Xin Gao, Xiangliang Zhang, Rui Yan
As it is cumbersome and expensive to acquire a huge amount of data for training neural dialog models, data augmentation is proposed to effectively utilize existing training samples.
no code implementations • 1 Feb 2023 • Yijun Tian, Shichao Pei, Xiangliang Zhang, Chuxu Zhang, Nitesh V. Chawla
Therefore, to improve the applicability of GNNs and fully encode the complicated topological information, knowledge distillation on graphs (KDG) has been introduced to build a smaller yet effective model and exploit more knowledge from data, leading to model compression and performance improvement.
1 code implementation • 2 Jan 2023 • Xiuying Chen, Mingzhe Li, Shen Gao, Zhangming Chan, Dongyan Zhao, Xin Gao, Xiangliang Zhang, Rui Yan
Nowadays, time-stamped web documents related to a general news query flood the Internet, and timeline summarization targets concisely summarizing the evolution trajectory of events along the timeline.
no code implementations • 13 Dec 2022 • Hongyan Bao, Yufei Han, Yujun Zhou, Xin Gao, Xiangliang Zhang
Our work targets searching for feasible adversarial perturbations to attack a classifier with high-dimensional categorical inputs in a domain-agnostic setting.
no code implementations • 13 Dec 2022 • Helene Orsini, Hongyan Bao, Yujun Zhou, Xiangrui Xu, Yufei Han, Longyang Yi, Wei Wang, Xin Gao, Xiangliang Zhang
Machine Learning-as-a-Service systems (MLaaS) have been largely developed for cybersecurity-critical applications, such as detecting network intrusions and fake news campaigns.
no code implementations • 8 Dec 2022 • Xiuying Chen, Mingzhe Li, Shen Gao, Rui Yan, Xin Gao, Xiangliang Zhang
We first propose a Multi-granularity Unsupervised Summarization model (MUS) as a simple and low-cost solution to the task.
1 code implementation • 1 Dec 2022 • Zhenwen Liang, Jipeng Zhang, Lei Wang, Yan Wang, Jie Shao, Xiangliang Zhang
In this paper, we design a new training framework for an MWP solver by introducing a solution buffer and a solution discriminator.
1 code implementation • 1 Dec 2022 • Zhenwen Liang, Jipeng Zhang, Xiangliang Zhang
In this paper, we propose to build a novel MWP solver by leveraging analogical MWPs, which advance the solver's generalization ability across different kinds of MWPs.
no code implementations • 19 Nov 2022 • Youssef Mohamed, Mohamed Abdelfattah, Shyma Alhuwaider, Feifan Li, Xiangliang Zhang, Kenneth Ward Church, Mohamed Elhoseiny
This paper introduces ArtELingo, a new benchmark and dataset, designed to encourage work on diversity across languages and cultures.
no code implementations • 19 Nov 2022 • Lin Xiao, Pengyu Xu, Liping Jing, Xiangliang Zhang
In response, we propose a Pairwise Instance Relation Augmentation Network (PIRAN) to augment tailed-label documents for balancing tail labels and head labels.
1 code implementation • 4 Oct 2022 • Xiuying Chen, Mingzhe Li, Xin Gao, Xiangliang Zhang
The evaluation of factual consistency also shows that our model generates more faithful summaries than baselines.
2 code implementations • 22 Aug 2022 • Yijun Tian, Chuxu Zhang, Zhichun Guo, Xiangliang Zhang, Nitesh V. Chawla
Existing methods attempt to address this scalability issue by training multi-layer perceptrons (MLPs) exclusively on node content features using labels derived from trained GNNs.
1 code implementation • 16 Aug 2022 • Tilman Hinnerichs, Zhenwei Tang, Xi Peng, Xiangliang Zhang, Robert Hoehndorf
Ontologies are one of the richest sources of knowledge.
1 code implementation • 28 Jul 2022 • Taicheng Guo, Lu Yu, Basem Shihada, Xiangliang Zhang
Second, the user preference over these topics is transferable across different platforms.
1 code implementation • 8 Jul 2022 • Zhichun Guo, Kehan Guo, Bozhao Nan, Yijun Tian, Roshni G. Iyer, Yihong Ma, Olaf Wiest, Xiangliang Zhang, Wei Wang, Chuxu Zhang, Nitesh V. Chawla
Recently, MRL has achieved considerable progress, especially in methods based on deep molecular graph learning.
Ranked #25 on Molecule Captioning on ChEBI-20
no code implementations • 29 May 2022 • Zhenwei Tang, Shichao Pei, Xi Peng, Fuzhen Zhuang, Xiangliang Zhang, Robert Hoehndorf
Neural logical reasoning (NLR) is a fundamental task to explore such knowledge bases, which aims at answering multi-hop queries with logical operations based on distributed representations of queries and answers.
1 code implementation • 26 May 2022 • Xiuying Chen, Hind Alamro, Mingzhe Li, Shen Gao, Rui Yan, Xin Gao, Xiangliang Zhang
The related work section is an important component of a scientific paper, which highlights the contribution of the target paper in the context of the reference papers.
no code implementations • 2 May 2022 • Zhenwei Tang, Shichao Pei, Zhao Zhang, Yongchun Zhu, Fuzhen Zhuang, Robert Hoehndorf, Xiangliang Zhang
Most real-world knowledge graphs (KG) are far from complete and comprehensive.
no code implementations • 17 Mar 2022 • Chuxu Zhang, Kaize Ding, Jundong Li, Xiangliang Zhang, Yanfang Ye, Nitesh V. Chawla, Huan Liu
In light of this, few-shot learning on graphs (FSLG), which combines the strengths of graph representation learning and few-shot learning, has been proposed to tackle the performance degradation in the face of limited annotated data.
no code implementations • 30 Dec 2021 • Yingquan Li, Zhenwen Liang, Ibrahima N'Doye, Xiangliang Zhang, Mohamed-Slim Alouini, Taous-Meriem Laleg-Kirati
Light-Emitting Diode (LED) based underwater optical wireless communications (UOWCs), a technology with low latency and high data rates, have attracted significant interest for underwater robots.
no code implementations • 7 Nov 2021 • Runmin Wang, Guoxian Yu, Lei Liu, Lizhen Cui, Carlotta Domeniconi, Xiangliang Zhang
Cross-modal hashing (CMH) is one of the most promising methods in cross-modal approximate nearest neighbor search.
no code implementations • 7 Nov 2021 • Guangyang Han, Guoxian Yu, Lizhen Cui, Carlotta Domeniconi, Xiangliang Zhang
Due to the unreliability of Internet workers, it is difficult to complete a crowdsourcing project satisfactorily, especially when the tasks are numerous and the budget is limited.
no code implementations • 7 Nov 2021 • Runmin Wang, Guoxian Yu, Carlotta Domeniconi, Xiangliang Zhang
To address the lack of training samples in the tail classes, MetaCMH first learns direct features from data in different modalities, and then introduces an associative memory module to learn the memory features of samples of the tail classes.
no code implementations • 7 Nov 2021 • Guangyang Han, Guoxian Yu, Lei Liu, Lizhen Cui, Carlotta Domeniconi, Xiangliang Zhang
First, OSCrowd integrates crowd theme related datasets into a large source domain to facilitate partial transfer learning to approximate the label space inference of these tasks.
no code implementations • ICLR 2022 • Hongyan Bao, Yufei Han, Yujun Zhou, Yun Shen, Xiangliang Zhang
Characterizing and assessing the adversarial vulnerability of classification models with categorical input has been a practically important, while rarely explored research problem.
no code implementations • 29 Sep 2021 • Uchenna Akujuobi, Xiangliang Zhang, Sucheendra Palaniappan, Michael Spranger
In this paper, we study the automatic hypothesis generation (HG) problem, focusing on explainability.
no code implementations • 29 Sep 2021 • Hind Alamro, Manal Alshehri, Basma Alharbi, Zuhair Khayyat, Manal Kalkatawi, Inji Ibrahim Jaber, Xiangliang Zhang
From our recently released ASAD dataset, we provide the competitors with 55K tweets for training and 20K tweets for validation, based on which the performance of participating teams is ranked on a leaderboard: https://www.kaggle.com/c/arabic-sentiment-analysis-2021-kaust.
no code implementations • 29 Sep 2021 • Lu Yu, Shichao Pei, Chuxu Zhang, Xiangliang Zhang
Pairwise ranking models have been widely used to address various problems, such as recommendation.
no code implementations • 8 Sep 2021 • Dan Su, Jiqiang Liu, Sencun Zhu, Xiaoyang Wang, Wei Wang, Xiangliang Zhang
In this work, we propose AppQ, a novel app quality grading and recommendation system that extracts inborn features of apps based on app source code.
no code implementations • 17 Aug 2021 • Wenbin Zhang, Albert Bifet, Xiangliang Zhang, Jeremy C. Weiss, Wolfgang Nejdl
This algorithm, called FARF (Fair and Adaptive Random Forests), is based on using online component classifiers and updating them according to the current distribution, while also accounting for fairness and providing a single hyperparameter that alters the fairness-accuracy balance.
1 code implementation • ACL 2021 • Xiuying Chen, Hind Alamro, Mingzhe Li, Shen Gao, Xiangliang Zhang, Dongyan Zhao, Rui Yan
Hence, in this paper, we propose a Relation-aware Related work Generator (RRG), which generates an abstractive related work from the given multiple scientific papers in the same research area.
1 code implementation • Findings (NAACL) 2022 • Zhenwen Liang, Jipeng Zhang, Lei Wang, Wei Qin, Yunshi Lan, Jie Shao, Xiangliang Zhang
Math word problem (MWP) solving faces a dilemma in number representation learning.
Ranked #5 on Math Word Problem Solving on MathQA
1 code implementation • 29 Jun 2021 • Zhuo Yang, Yufei Han, Xiangliang Zhang
We unveil how the transferability level of the attack determines the attackability of the classifier via establishing an information-theoretic analysis of the adversarial risk.
no code implementations • 7 Jun 2021 • Basmah Altaf, Shichao Pei, Xiangliang Zhang
Data intensive research requires the support of appropriate datasets.
1 code implementation • 7 Jun 2021 • Junliang Yu, Hongzhi Yin, Min Gao, Xin Xia, Xiangliang Zhang, Nguyen Quoc Viet Hung
Under this scheme, only a bijective mapping is built between nodes in two different views, which means that the self-supervision signals from other nodes are being neglected.
no code implementations • 5 Apr 2021 • Tong Chen, Hongzhi Yin, Xiangliang Zhang, Zi Huang, Yang Wang, Meng Wang
As a well-established approach, factorization machine (FM) is capable of automatically learning high-order interactions among features to make predictions without the need for manual feature engineering.
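The second-order factorization machine referenced here has a standard closed form; a minimal NumPy sketch (the function name `fm_predict` and parameters `w0`, `w`, `V` follow the generic FM formulation, not this paper's exact model):

```python
import numpy as np

def fm_predict(x: np.ndarray, w0: float, w: np.ndarray, V: np.ndarray) -> float:
    """Second-order FM: y = w0 + <w, x> + sum_{i<j} <V_i, V_j> x_i x_j,
    with the pairwise term computed via the O(kn) identity
    0.5 * sum_f [ (sum_i V_{if} x_i)^2 - sum_i V_{if}^2 x_i^2 ]."""
    linear = w0 + float(w @ x)
    vx = V.T @ x                       # (k,): sum_i V_{if} x_i per factor f
    sq = (V ** 2).T @ (x ** 2)         # (k,): sum_i V_{if}^2 x_i^2 per factor f
    return linear + 0.5 * float(np.sum(vx ** 2 - sq))

# toy example: 4 features, 2 latent factors per feature
x = np.array([1.0, 0.0, 2.0, 0.5])
V = np.array([[0.1, 0.2], [0.3, 0.1], [0.2, 0.4], [0.5, 0.3]])
y = fm_predict(x, w0=0.5, w=np.array([0.1, -0.2, 0.3, 0.0]), V=V)
```

The latent factors `V` let the model estimate interactions for feature pairs never observed together, which is what removes the need for manual feature engineering.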
no code implementations • 4 Apr 2021 • Tong Chen, Hongzhi Yin, Jie Ren, Zi Huang, Xiangliang Zhang, Hao Wang
In WIDEN, we propose a novel inductive, meta path-free message passing scheme that packs up heterogeneous node features with their associated edges from both low- and high-order neighbor nodes.
no code implementations • 2 Apr 2021 • Qinyong Wang, Hongzhi Yin, Tong Chen, Junliang Yu, Alexander Zhou, Xiangliang Zhang
In the mobile Internet era, the recommender system has become an irreplaceable tool that helps users discover useful items, thus alleviating the information overload problem.
no code implementations • 24 Mar 2021 • Lei Guo, Hongzhi Yin, Tong Chen, Xiangliang Zhang, Kai Zheng
However, representation learning for a group is far more complex than the fusion of group member representations, as the personal preferences and group preferences may lie in different spaces.
no code implementations • 29 Jan 2021 • Shijie Zhang, Hongzhi Yin, Tong Chen, Zi Huang, Lizhen Cui, Xiangliang Zhang
Specifically, in GERAI, we bind the information perturbation mechanism in differential privacy with the recommendation capability of graph convolutional networks.
no code implementations • 27 Jan 2021 • Yongchun Zhu, Fuzhen Zhuang, Xiangliang Zhang, Zhiyuan Qi, Zhiping Shi, Juan Cao, Qing He
However, in real-world applications, the few-shot learning paradigm often suffers from data shift, i.e., samples in different tasks, even in the same task, could be drawn from various data distributions.
1 code implementation • 24 Jan 2021 • Lin Xiao, Xiangliang Zhang, Liping Jing, Chi Huang, Mingyang Song
To address the challenge of insufficient training data on tail label classification, we propose a Head-to-Tail Network (HTTN) to transfer the meta-knowledge from the data-rich head labels to data-poor tail labels.
4 code implementations • 16 Jan 2021 • Junliang Yu, Hongzhi Yin, Jundong Li, Qinyong Wang, Nguyen Quoc Viet Hung, Xiangliang Zhang
In this paper, we fill this gap and propose a multi-channel hypergraph convolutional network to enhance social recommendation by leveraging high-order user relations.
no code implementations • 8 Jan 2021 • Guanhua Ye, Hongzhi Yin, Tong Chen, Hongxu Chen, Lizhen Cui, Xiangliang Zhang
Obstructive Sleep Apnea (OSA) is a highly prevalent but inconspicuous disease that seriously jeopardizes the health of human beings.
no code implementations • 17 Dec 2020 • Zhuo Yang, Yufei Han, Xiangliang Zhang
Evasion attack in multi-label learning systems is an interesting, widely witnessed, yet rarely explored research topic.
2 code implementations • 12 Dec 2020 • Xin Xia, Hongzhi Yin, Junliang Yu, Qinyong Wang, Lizhen Cui, Xiangliang Zhang
Moreover, to enhance hypergraph modeling, we devise another graph convolutional network based on the line graph of the hypergraph. We then integrate self-supervised learning into the training of the two networks by maximizing the mutual information between the session representations they learn, which serves as an auxiliary task to improve the recommendation task.
no code implementations • 1 Nov 2020 • Basma Alharbi, Hind Alamro, Manal Alshehri, Zuhair Khayyat, Manal Kalkatawi, Inji Ibrahim Jaber, Xiangliang Zhang
This paper provides a detailed description of a new Twitter-based benchmark dataset for Arabic Sentiment Analysis (ASAD), which is launched in a competition sponsored by KAUST, awarding 10,000 USD, 5,000 USD, and 2,000 USD to the first, second, and third place winners, respectively.
no code implementations • 28 Oct 2020 • Waqas W. Ahmed, Mohamed Farhat, Xiangliang Zhang, Ying Wu
Concealing an object from incoming waves (light and/or sound) remained science fiction for a long time due to the absence of wave-shielding materials in nature.
Applied Physics • Computational Physics
no code implementations • 6 Oct 2020 • Yuanlin Yang, Guoxian Yu, Jun Wang, Carlotta Domeniconi, Xiangliang Zhang
Multi-typed objects Multi-view Multi-instance Multi-label Learning (M4L) deals with interconnected multi-typed objects (or bags) that are made of diverse instances, represented with heterogeneous feature views and annotated with a set of non-exclusive but semantically related labels.
no code implementations • NeurIPS 2020 • Uchenna Akujuobi, Jun Chen, Mohamed Elhoseiny, Michael Spranger, Xiangliang Zhang
Then, the key is to capture the temporal evolution of node pair (term pair) relations from just the positive and unlabeled data.
no code implementations • 2 Oct 2020 • Shaowei Wei, Jun Wang, Guoxian Yu, Carlotta Domeniconi, Xiangliang Zhang
Multi-view clustering aims at exploiting information from multiple heterogeneous views to promote clustering.
no code implementations • 2 Sep 2020 • Lu Yu, Shichao Pei, Lizhong Ding, Jun Zhou, Longfei Li, Chuxu Zhang, Xiangliang Zhang
This paper studies learning node representations with graph neural networks (GNNs) in the unsupervised setting.
no code implementations • 25 Aug 2020 • Zhize Li, Hongyan Bao, Xiangliang Zhang, Peter Richtárik
Then, we show that PAGE obtains the optimal convergence results $O(n+\frac{\sqrt{n}}{\epsilon^2})$ (finite-sum) and $O(b+\frac{\sqrt{b}}{\epsilon^2})$ (online) matching our lower bounds for both nonconvex finite-sum and online problems.
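The PAGE update alternates, via a coin flip with probability $p$, between an expensive full-gradient refresh and a cheap correction of the previous gradient estimate. A minimal sketch, under the simplifying assumption that the mini-batch oracle `grad` coincides with the full gradient `grad_full` (both helper names are illustrative):

```python
import random

def page_sgd(grad, grad_full, x0, lr=0.1, p=0.5, steps=50, rng=random):
    """Sketch of the PAGE gradient estimator: with probability p refresh the
    estimate with a full gradient, otherwise reuse the previous estimate plus
    a gradient difference between the new and old iterates."""
    x = x0
    g = grad_full(x)
    for _ in range(steps):
        x_prev, x = x, x - lr * g
        if rng.random() < p:
            g = grad_full(x)                  # expensive refresh
        else:
            g = g + grad(x) - grad(x_prev)    # cheap recursive correction
    return x
```

With `grad == grad_full` the estimate stays exact, so the sketch reduces to gradient descent; in the actual algorithm `grad` is a mini-batch oracle and the correction keeps the estimator's variance controlled.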
no code implementations • 12 Jul 2020 • Dongbo Xi, Fuzhen Zhuang, Yongchun Zhu, Pengpeng Zhao, Xiangliang Zhang, Qing He
In this paper, we propose a Graph Factorization Machine (GFM) which utilizes the popular Factorization Machine to aggregate multi-order interactions from neighborhood for recommendation.
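The Factorization Machine core that GFM builds on scores all pairwise feature interactions through latent vectors; the classic identity makes this linear rather than quadratic in the number of features. A sketch of that second-order term only, with the surrounding graph aggregation omitted (`fm_pairwise` is an illustrative name):

```python
def fm_pairwise(x, V):
    """Second-order Factorization Machine interaction term,
    sum_{i<j} <v_i, v_j> x_i x_j, computed with the O(k*n) identity
    0.5 * sum_f ((sum_i v_if x_i)^2 - sum_i (v_if x_i)^2).
    x: feature values; V: one k-dimensional latent vector per feature."""
    k = len(V[0])
    total = 0.0
    for f in range(k):
        s = sum(V[i][f] * x[i] for i in range(len(x)))
        sq = sum((V[i][f] * x[i]) ** 2 for i in range(len(x)))
        total += 0.5 * (s * s - sq)
    return total
```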
2 code implementations • 18 Jun 2020 • Qiang Yang, Hind Alamro, Somayah Albaradei, Adil Salhi, Xiaoting Lv, Changsheng Ma, Manal Alshehri, Inji Jaber, Faroug Tifratene, Wei Wang, Takashi Gojobori, Carlos M. Duarte, Xin Gao, Xiangliang Zhang
Since the first alert launched by the World Health Organization (5 January 2020), COVID-19 has spread to over 180 countries and territories.
no code implementations • 19 May 2020 • Lu Yu, Shichao Pei, Chuxu Zhang, Shangsong Liang, Xiao Bai, Nitesh Chawla, Xiangliang Zhang
Pairwise ranking models have been widely used to address recommendation problems.
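A standard member of this family is Bayesian Personalized Ranking (BPR), which pushes an observed item's score above an unobserved one's. A minimal sketch of the per-pair loss, assuming the scores are already computed upstream:

```python
import math

def bpr_loss(score_pos, score_neg):
    """BPR loss for one (positive, negative) item pair:
    -log sigmoid(s_pos - s_neg). The loss shrinks toward zero as the
    positive item outscores the negative one."""
    diff = score_pos - score_neg
    return -math.log(1.0 / (1.0 + math.exp(-diff)))
```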
no code implementations • Conference 2020 • Jun Chen, Robert Hoehndorf, Mohamed Elhoseiny, Xiangliang Zhang
In natural language processing, relation extraction seeks to understand the relations expressed in unstructured text.
Ranked #17 on Relation Extraction on TACRED
no code implementations • 24 Dec 2019 • Jingzheng Tu, Guoxian Yu, Jun Wang, Carlotta Domeniconi, Xiangliang Zhang
However, they all assume that workers' label quality is stable over time (always at the same level whenever they conduct the tasks).
no code implementations • 26 Nov 2019 • Shaowei Wei, Jun Wang, Guoxian Yu, Carlotta Domeniconi, Xiangliang Zhang
Multi-view clustering aims at integrating complementary information from multiple heterogeneous views to improve clustering results.
no code implementations • 17 Nov 2019 • Zhuo Yang, Yufei Han, Guoxian Yu, Qiang Yang, Xiangliang Zhang
We propose to formulate multi-label learning as the estimation of class distributions in a non-linear embedding space. For each label, the positive data embeddings and the negative data embeddings each distribute compactly to form a positive component and a negative component, respectively, while the two components are pushed away from each other.
no code implementations • 7 Nov 2019 • Jinzheng Tu, Guoxian Yu, Carlotta Domeniconi, Jun Wang, Xiangliang Zhang
AMCC accounts for the commonality and individuality of workers, and assumes that workers can be organized into different groups.
1 code implementation • 22 Oct 2019 • Uchenna Akujuobi, Han Yufei, Qiannan Zhang, Xiangliang Zhang
In this work, we study the semi-supervised multi-label node classification problem in attributed graphs.
1 code implementation • 22 Oct 2019 • Uchenna Akujuobi, Qiannan Zhang, Han Yufei, Xiangliang Zhang
We propose to explore the neighborhood in a reinforcement learning setting and find a walk path well-tuned for classifying the unlabelled target nodes.
no code implementations • 25 Sep 2019 • Shupeng Gui, Xiangliang Zhang, Pan Zhong, Shuang Qiu, Mingrui Wu, Jieping Ye, Zhengdao Wang, Ji Liu
The key problem in graph node embedding lies in how to define the dependence to neighbors.
no code implementations • 19 Aug 2019 • Xuanwu Liu, Zhao Li, Jun Wang, Guoxian Yu, Carlotta Domeniconi, Xiangliang Zhang
It then defines an objective function to achieve deep feature learning compatible with composite similarity preservation, category attribute space learning, and hash coding function learning.
no code implementations • 29 May 2019 • Xuanwu Liu, Jun Wang, Guoxian Yu, Carlotta Domeniconi, Xiangliang Zhang
FlexCMH first introduces a clustering-based matching strategy to explore the local structure of each cluster, and thus to find the potential correspondence between clusters (and samples therein) across modalities.
no code implementations • 14 May 2019 • Xia Chen, Guoxian Yu, Jun Wang, Carlotta Domeniconi, Zhao Li, Xiangliang Zhang
To maximize the profit of utilizing the rare and valuable supervised information in HNEs, we develop a novel Active Heterogeneous Network Embedding (ActiveHNE) framework, which includes two components: Discriminative Heterogeneous Network Embedding (DHNE) and Active Query in Heterogeneous Networks (AQHN).
no code implementations • 13 May 2019 • Shixing Yao, Guoxian Yu, Jun Wang, Carlotta Domeniconi, Xiangliang Zhang
It then uses matrix factorization on the individual matrices, along with the shared matrix, to generate diverse, high-quality clusterings.
no code implementations • 8 May 2019 • Yufei Han, Xiangliang Zhang
In our work, we propose a collaborative and privacy-preserving machine teaching paradigm with multiple distributed teachers, to improve robustness of the federated training process against local data corruption.
no code implementations • 19 Apr 2019 • Khalil Elkhalil, Abla Kammoun, Xiangliang Zhang, Mohamed-Slim Alouini, Tareq Al-Naffouri
This paper carries out a large dimensional analysis of a variation of kernel ridge regression that we call \emph{centered kernel ridge regression} (CKRR), also known in the literature as kernel ridge regression with offset.
no code implementations • 27 Sep 2018 • Shupeng Gui, Xiangliang Zhang, Shuang Qiu, Mingrui Wu, Jieping Ye, Ji Liu
Our method can 1) learn an arbitrary form of the representation function from the neighborhood, without losing any potential dependence structures, 2) automatically decide the significance of neighbors at different distances, and 3) be applicable to both homogeneous and heterogeneous graph embedding, which may contain multiple types of nodes.
no code implementations • 7 Jul 2018 • Lun Li, Jiqiang Liu, Lichen Cheng, Shuo Qiu, Wei Wang, Xiangliang Zhang, Zonghua Zhang
The vehicular announcement network is one of the most promising utilities in the communications of smart vehicles and in the smart transportation systems.
no code implementations • 28 May 2018 • Shupeng Gui, Xiangliang Zhang, Shuang Qiu, Mingrui Wu, Jieping Ye, Ji Liu
Graph embedding is a central problem in social network analysis and many other applications, aiming to learn the vector representation for each node.
no code implementations • 18 Oct 2017 • Guolei Sun, Xiangliang Zhang
In this paper, we propose a novel and general framework for representation learning on graphs with rich text information, built by constructing a bipartite heterogeneous network.
no code implementations • 25 Feb 2017 • Ke Sun, Xiangliang Zhang
Variational autoencoders (VAEs) often use Gaussian or categorical distributions to model the inference process.
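The Gaussian case relies on the reparameterization trick, sampling z = mu + sigma * eps with eps ~ N(0, 1) so that gradients flow through mu and log_var, plus a closed-form KL term against a standard normal prior. A minimal sketch (the injectable `rng` parameter is an assumption for testability, not part of any specific VAE API):

```python
import math
import random

def reparameterize(mu, log_var, rng=random):
    """z = mu + sigma * eps with eps ~ N(0, 1): the reparameterization trick
    that keeps Gaussian sampling differentiable w.r.t. mu and log_var."""
    eps = rng.gauss(0.0, 1.0)
    return mu + math.exp(0.5 * log_var) * eps

def gaussian_kl(mu, log_var):
    """Closed-form KL(N(mu, sigma^2) || N(0, 1)), the regularizer in the
    standard VAE objective; zero exactly when mu = 0 and sigma = 1."""
    return 0.5 * (mu * mu + math.exp(log_var) - 1.0 - log_var)
```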
1 code implementation • ACM SIGKDD international conference on Knowledge discovery and data mining 2015 • Abdulhakim A. Qahtan, Basma Alharbi, Suojin Wang, Xiangliang Zhang
In this paper, we propose a framework for detecting changes in multidimensional data streams based on principal component analysis, which is used for projecting data into a lower dimensional space, thus facilitating density estimation and change-score calculations.
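The general recipe (project the stream onto leading principal components, then compare the densities of a reference window and a current window) can be sketched as follows. This is an illustrative simplification using power iteration for a single component, histogram densities, and a total-variation change score, not the paper's exact estimator:

```python
def top_pc(window, iters=100):
    """Mean and leading principal component of a data window, via power
    iteration on its covariance matrix (pure Python for illustration)."""
    n, d = len(window), len(window[0])
    mean = [sum(row[j] for row in window) / n for j in range(d)]
    centered = [[row[j] - mean[j] for j in range(d)] for row in window]
    cov = [[sum(c[a] * c[b] for c in centered) / n for b in range(d)]
           for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        if norm == 0.0:
            break
        v = [x / norm for x in w]
    return mean, v

def change_score(ref, new, bins=10):
    """Project both windows onto ref's top component, then compare their 1-D
    histogram densities with total-variation distance (0 = identical)."""
    mean, v = top_pc(ref)
    proj = lambda row: sum((row[j] - mean[j]) * v[j] for j in range(len(v)))
    a, b = [proj(r) for r in ref], [proj(r) for r in new]
    lo, hi = min(a + b), max(a + b)
    width = (hi - lo) / bins or 1.0
    def hist(xs):
        h = [0] * bins
        for x in xs:
            h[min(bins - 1, int((x - lo) / width))] += 1
        return [c / len(xs) for c in h]
    return 0.5 * sum(abs(p - q) for p, q in zip(hist(a), hist(b)))
```

Projecting first makes the density estimation tractable: comparing 1-D (or low-dimensional) histograms sidesteps the curse of dimensionality that direct density estimation in the raw space would face.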