no code implementations • COLING 2022 • Jamell Dacon, Haochen Liu, Jiliang Tang
In this work, we conduct a pioneering study of the use of the English variety African American English (AAE) in the NLI task.
no code implementations • 31 Dec 2024 • Haoyu Han, Yu Wang, Harry Shomer, Kai Guo, Jiayuan Ding, Yongjia Lei, Mahantesh Halappanavar, Ryan A. Rossi, Subhabrata Mukherjee, Xianfeng Tang, Qi He, Zhigang Hua, Bo Long, Tong Zhao, Neil Shah, Amin Javari, Yinglong Xia, Jiliang Tang
However, unlike conventional RAG, where the retriever, generator, and external data sources can be uniformly designed in the neural-embedding space, the uniqueness of graph-structured data, such as diverse formats and domain-specific relational knowledge, poses significant challenges when designing GraphRAG for different domains.
no code implementations • 2 Dec 2024 • Linxin Yang, Bingheng Li, Tian Ding, Jianghua Wu, Akang Wang, Yuyi Wang, Jiliang Tang, Ruoyu Sun, Xiaodong Luo
Unlike the standard learning-to-optimize framework that requires optimization solutions generated by solvers, our unsupervised method adjusts the network weights directly from the evaluation of the primal-dual gap.
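The idea of using the primal-dual gap as an unsupervised training signal can be sketched as follows; the 2-variable LP and the feasible points below are hypothetical illustrations, not the paper's setup:

```python
import numpy as np

# Toy LP:  min c^T x  s.t.  A x >= b,  x >= 0
# Dual:    max b^T y  s.t.  A^T y <= c, y >= 0
# For any primal/dual feasible pair, the gap c^T x - b^T y is
# non-negative and equals 0 exactly at optimality -- so it can act
# as an unsupervised loss, with no solver-generated labels needed.
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

def primal_dual_gap(x, y):
    """Weak-duality gap for a feasible pair (x, y)."""
    return float(c @ x - b @ y)

# A feasible but suboptimal pair leaves a positive gap ...
gap_far = primal_dual_gap(np.array([1.0, 1.0]), np.array([0.5]))
# ... while the optimal pair x* = (1, 0), y* = (1,) closes it.
gap_opt = primal_dual_gap(np.array([1.0, 0.0]), np.array([1.0]))
```

In the learned setting, the network outputs (x, y) and this gap is what gets minimized in place of a supervised loss against solver solutions.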
no code implementations • 30 Nov 2024 • Jingzhe Liu, Haitao Mao, Zhikai Chen, Wenqi Fan, Mingxuan Ju, Tong Zhao, Neil Shah, Jiliang Tang
Graph Neural Networks (GNNs) have emerged as a powerful tool to capture intricate network patterns, achieving success across different domains.
no code implementations • 21 Nov 2024 • Shenglai Zeng, Jiankun Zhang, Bingheng Li, Yuping Lin, Tianqi Zheng, Dante Everaert, Hanqing Lu, Hui Liu, Yue Xing, Monica Xiao Cheng, Jiliang Tang
We conduct a comprehensive analysis of LLM representation behaviors and demonstrate the significance of using representations in knowledge checking.
no code implementations • 12 Nov 2024 • Juanhui Li, Sreyashi Nag, Hui Liu, Xianfeng Tang, Sheikh Sarwar, Limeng Cui, Hansu Gu, Suhang Wang, Qi He, Jiliang Tang
However, the large size and high computation demands of LLMs limit their practicality in many applications, especially when further fine-tuning is required.
1 code implementation • 8 Nov 2024 • Dong Shu, Bingbing Duan, Kai Guo, Kaixiong Zhou, Jiliang Tang, Mengnan Du
In this study, we explore the alignment of multimodal representations between LLMs and Geometric Deep Models (GDMs) in the protein domain.
no code implementations • 21 Oct 2024 • Yingqian Cui, Pengfei He, Xianfeng Tang, Qi He, Chen Luo, Jiliang Tang, Yue Xing
Few-shot Chain-of-Thought (CoT) prompting has demonstrated strong performance in improving the reasoning capabilities of large language models (LLMs).
1 code implementation • 18 Oct 2024 • Pengfei He, Zitao Li, Yue Xing, Yaling Li, Jiliang Tang, Bolin Ding
In this paper, we address this limitation by introducing a novel structure-oriented analysis method to help LLMs better understand the question and guide the problem-solving process of LLMs.
no code implementations • 16 Oct 2024 • Jie Ren, Kangrui Chen, Chen Chen, Vikash Sehwag, Yue Xing, Jiliang Tang, Lingjuan Lyu
Existing methods, such as sample-level Membership Inference Attacks (MIA) and distribution-based dataset inference, distinguish member data (data used for training) and non-member data by leveraging the common observation that models tend to memorize and show greater confidence in member data.
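The common observation these methods build on can be illustrated with a minimal sketch (the confidence numbers and the threshold are hypothetical):

```python
# Hypothetical confidence scores illustrating the observation that
# models tend to be more confident on member (training) data than on
# non-member data; a sample-level MIA simply thresholds confidence.
member_conf = [0.97, 0.91, 0.88, 0.95]
nonmember_conf = [0.61, 0.55, 0.72, 0.40]

def infer_membership(confidences, threshold=0.8):
    """Predict 'member' when model confidence exceeds the threshold."""
    return [c > threshold for c in confidences]

pred_members = infer_membership(member_conf)
pred_nonmembers = infer_membership(nonmember_conf)
```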
no code implementations • 12 Oct 2024 • Pengfei He, Yingqian Cui, Han Xu, Hui Liu, Makoto Yamada, Jiliang Tang, Yue Xing
To better understand how ICL integrates the examples with the knowledge learned by the LLM during pre-training (i.e., pre-training knowledge) and how the examples impact ICL, this paper conducts a theoretical study in binary classification tasks.
no code implementations • 4 Oct 2024 • Xinnan Dai, Haohao Qu, Yifen Shen, Bohang Zhang, Qihao Wen, Wenqi Fan, Dongsheng Li, Jiliang Tang, Caihua Shan
The benchmark encompasses both synthetic and real datasets, and a variety of models, with a total of 11 tasks and 7 models.
no code implementations • 3 Oct 2024 • Yucheng Chu, Hang Li, Kaiqi Yang, Harry Shomer, Hui Liu, Yasemin Copur-Gencturk, Jiliang Tang
Open-ended short-answer questions (SAQs) have been widely recognized as a powerful tool for providing deeper insights into learners' responses in the context of learning analytics (LA).
no code implementations • 13 Sep 2024 • Hang Li, Wei Jin, Geri Skenderi, Harry Shomer, Wenzhuo Tang, Wenqi Fan, Jiliang Tang
In particular, we treat link prediction between a pair of nodes as a conditional likelihood estimation of its enclosing sub-graph.
no code implementations • 18 Aug 2024 • Xinnan Dai, Qihao Wen, Yifei Shen, Hongzhi Wen, Dongsheng Li, Jiliang Tang, Caihua Shan
In this work, we focus on the graph reasoning ability of LLMs.
no code implementations • 17 Aug 2024 • Jinhui Pang, Zixuan Wang, Jiliang Tang, Mingyan Xiao, Nan Yin
Following this observation, we align the category feature space of different domains in the spectral domain instead of aligning the whole feature space, and we theoretically prove the stability of the proposed \method{}.
no code implementations • 5 Aug 2024 • Chen Luo, Xianfeng Tang, Hanqing Lu, Yaochen Xie, Hui Liu, Zhenwei Dai, Limeng Cui, Ashutosh Joshi, Sreyashi Nag, Yang Li, Zhen Li, Rahul Goutam, Jiliang Tang, Haiyang Zhang, Qi He
Next, we delve into how the query understanding system contributes to understanding the performance of a ranking model.
no code implementations • 21 Jul 2024 • Guangliang Liu, Haitao Mao, Jiliang Tang, Kristen Marie Johnson
Through empirical investigation with tasks of language generation and multi-choice question answering, we conclude: (i) LLMs exhibit good performance across both tasks, and self-correction instructions are particularly beneficial when the correct answer is already top-ranked; (ii) the morality levels in intermediate hidden states are strong indicators as to whether one instruction would be more effective than another; (iii) based on our analysis of intermediate hidden states and task case studies of self-correction behaviors, we are the first to propose the hypothesis that intrinsic moral self-correction is in fact superficial.
no code implementations • 16 Jul 2024 • Kai Guo, Zewen Liu, Zhikai Chen, Hongzhi Wen, Wei Jin, Jiliang Tang, Yi Chang
To address this gap, our work aims to explore the potential of LLMs in the context of adversarial attacks on graphs.
1 code implementation • 21 Jun 2024 • Jie Ren, Kangrui Chen, Yingqian Cui, Shenglai Zeng, Hui Liu, Yue Xing, Jiliang Tang, Lingjuan Lyu
To address these gaps, we propose to benchmark the concept removal methods by introducing a new dataset, Six-CD, along with a novel evaluation metric.
1 code implementation • 19 Jun 2024 • Yu Song, Haitao Mao, Jiachen Xiao, Jingzhe Liu, Zhikai Chen, Wei Jin, Carl Yang, Jiliang Tang, Hui Liu
Pretraining plays a pivotal role in acquiring generalized knowledge from large-scale data, achieving remarkable successes as evidenced by large models in CV and NLP.
no code implementations • 19 Jun 2024 • Hang Li, Tianlong Xu, Jiliang Tang, Qingsong Wen
Knowledge tagging for questions plays a crucial role in contemporary intelligent educational applications, including learning progress diagnosis, practice question recommendations, and course content organization.
1 code implementation • 16 Jun 2024 • Yuping Lin, Pengfei He, Han Xu, Yue Xing, Makoto Yamada, Hui Liu, Jiliang Tang
Large language models (LLMs) are susceptible to a type of attack known as jailbreaking, which misleads LLMs into outputting harmful content.
1 code implementation • 15 Jun 2024 • Zhikai Chen, Haitao Mao, Jingzhe Liu, Yu Song, Bingheng Li, Wei Jin, Bahare Fatemi, Anton Tsitsulin, Bryan Perozzi, Hui Liu, Jiliang Tang
First, the absence of a comprehensive benchmark with unified problem settings hinders a clear understanding of the comparative effectiveness and practical value of different text-space GFMs.
1 code implementation • 14 Jun 2024 • Harry Shomer, Jay Revolinsky, Jiliang Tang
Knowledge Graph Completion (KGC) attempts to predict missing facts in a Knowledge Graph (KG).
no code implementations • 13 Jun 2024 • Jay Revolinsky, Harry Shomer, Jiliang Tang
To tackle the distribution shift problem, recent work focuses on creating datasets that feature distribution shifts and designing generalization methods that perform well on the new data.
no code implementations • 5 Jun 2024 • Haoyu Han, Juanhui Li, Wei Huang, Xianfeng Tang, Hanqing Lu, Chen Luo, Hui Liu, Jiliang Tang
Traditionally, GNNs employ a uniform global filter, typically a low-pass filter for homophilic graphs and a high-pass filter for heterophilic graphs.
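A minimal numpy sketch of such uniform global filters, assuming a row-normalized adjacency as the aggregation operator (a common convention, not necessarily the exact filters the paper studies):

```python
import numpy as np

# Toy triangle graph: with row-normalized adjacency A_hat, a low-pass
# filter averages each node's neighborhood (smoothing, suited to
# homophily), while a high-pass filter keeps the residual against the
# neighborhood average (suited to heterophily).
A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
deg = A.sum(axis=1)
A_hat = A / deg[:, None]          # row-normalized adjacency
I = np.eye(3)

x = np.array([1.0, 0.0, -1.0])    # a node signal

low_pass = A_hat @ x              # neighborhood average
high_pass = (I - A_hat) @ x       # signal minus neighborhood average
```

By construction the two components sum back to the original signal, which is why a single uniform filter discards exactly the band the other one keeps.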
1 code implementation • 4 Jun 2024 • Bingheng Li, Linxin Yang, Yupeng Chen, Senmiao Wang, Qian Chen, Haitao Mao, Yao Ma, Akang Wang, Tian Ding, Jiliang Tang, Ruoyu Sun
In this work, we propose an FOM-unrolled neural network (NN) called PDHG-Net, and propose a two-stage L2O method to solve large-scale LP problems.
1 code implementation • 4 Jun 2024 • Wenzhuo Tang, Haitao Mao, Danial Dervovic, Ivan Brugere, Saumitra Mishra, Yuying Xie, Jiliang Tang
To achieve effective data scaling, we aim to develop a general model that is able to capture diverse data patterns of graphs and can be utilized to adaptively help the downstream tasks.
no code implementations • 4 Jun 2024 • Guangliang Liu, Haitao Mao, Bochuan Cao, Zhiyu Xue, Xitong Zhang, Rongrong Wang, Jiliang Tang, Kristen Johnson
Our findings are verified in: (1) the scenario of multi-round question answering, by comprehensively demonstrating that intrinsic self-correction can progressively introduce performance gains through iterative interactions, ultimately converging to stable performance; and (2) the context of intrinsic self-correction for enhanced morality, in which we provide empirical evidence that iteratively applying instructions reduces model uncertainty towards convergence, which then leads to convergence of both the calibration error and self-correction performance, ultimately resulting in a stable state of intrinsic self-correction.
no code implementations • 26 Mar 2024 • Shen Wang, Tianlong Xu, Hang Li, Chaoli Zhang, Joleen Liang, Jiliang Tang, Philip S. Yu, Qingsong Wen
The advent of Large Language Models (LLMs) has brought in a new era of possibilities in the realm of education.
no code implementations • 26 Mar 2024 • Hang Li, Tianlong Xu, Jiliang Tang, Qingsong Wen
Knowledge concept tagging for questions plays a crucial role in contemporary intelligent educational applications, including learning progress diagnosis, practice question recommendations, and course content organization.
no code implementations • 22 Mar 2024 • Kaiqi Yang, Yucheng Chu, Taylor Darwin, Ahreum Han, Hang Li, Hongzhi Wen, Yasemin Copur-Gencturk, Jiliang Tang, Hui Liu
Teachers' mathematical content knowledge (CK) is of vital importance in teacher professional development (PD) programs.
1 code implementation • 17 Mar 2024 • Jie Ren, Yaxin Li, Shenglai Zeng, Han Xu, Lingjuan Lyu, Yue Xing, Jiliang Tang
Recent advancements in text-to-image diffusion models have demonstrated their remarkable capability to generate high-quality images from textual prompts.
1 code implementation • 23 Feb 2024 • Shenglai Zeng, Jiankun Zhang, Pengfei He, Yue Xing, Yiding Liu, Han Xu, Jie Ren, Shuaiqiang Wang, Dawei Yin, Yi Chang, Jiliang Tang
In this work, we conduct extensive empirical studies with novel attack methods, which demonstrate the vulnerability of RAG systems on leaking the private retrieval database.
1 code implementation • 14 Feb 2024 • Juanhui Li, Haoyu Han, Zhikai Chen, Harry Shomer, Wei Jin, Amin Javari, Jiliang Tang
To integrate text information, various methods have been introduced, mostly following a naive fusion framework.
no code implementations • 14 Feb 2024 • Hanbing Wang, Xiaorui Liu, Wenqi Fan, Xiangyu Zhao, Venkataramana Kini, Devendra Yadav, Fei Wang, Zhen Wen, Jiliang Tang, Hui Liu
This design stems from our empirical observation that beam search decoding is ultimately unnecessary for sequential recommendations.
no code implementations • 13 Feb 2024 • Kai Guo, Hongzhi Wen, Wei Jin, Yaming Guo, Jiliang Tang, Yi Chang
These insights have empowered us to develop a novel GNN backbone model, DGAT, designed to harness the robust properties of both graph self-attention mechanism and the decoupled architecture.
1 code implementation • 13 Feb 2024 • Li Ma, Haoyu Han, Juanhui Li, Harry Shomer, Hui Liu, Xiaofeng Gao, Jiliang Tang
Link prediction, which aims to forecast unseen connections in graphs, is a fundamental task in graph machine learning.
no code implementations • 7 Feb 2024 • Soo Yong Lee, Sunwoo Kim, Fanchen Bu, Jaemin Yoo, Jiliang Tang, Kijung Shin
Second, how does A-X dependence affect GNNs?
1 code implementation • 6 Feb 2024 • Ziwen Zhao, Yuhua Li, Yixiong Zou, Jiliang Tang, Ruixuan Li
Inspired by these insights, we explore non-discrete edge masks, which are sampled from a continuous and dispersive probability distribution instead of the discrete Bernoulli distribution.
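The contrast between discrete Bernoulli masks and non-discrete masks can be sketched as follows (the toy graph and the Beta distribution are illustrative stand-ins, not the authors' choices):

```python
import random

random.seed(0)

# Edge list of a toy graph; masking decides how strongly each edge is
# hidden from the model during masked-autoencoder pretraining.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]

def bernoulli_mask(edges, p=0.5):
    """Discrete masks: each edge is fully kept (1.0) or fully dropped (0.0)."""
    return {e: float(random.random() < p) for e in edges}

def continuous_mask(edges, alpha=2.0, beta=2.0):
    """Non-discrete masks sampled from a continuous, dispersive
    distribution (a Beta distribution here, as a stand-in)."""
    return {e: random.betavariate(alpha, beta) for e in edges}

hard = bernoulli_mask(edges)
soft = continuous_mask(edges)
```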
no code implementations • 4 Feb 2024 • Jie Ren, Han Xu, Pengfei He, Yingqian Cui, Shenglai Zeng, Jiankun Zhang, Hongzhi Wen, Jiayuan Ding, Pei Huang, Lingjuan Lyu, Hui Liu, Yi Chang, Jiliang Tang
We examine from two distinct viewpoints: the copyrights pertaining to the source data held by the data owners and those of the generative models maintained by the model builders.
1 code implementation • 3 Feb 2024 • Haitao Mao, Zhikai Chen, Wenzhuo Tang, Jianan Zhao, Yao Ma, Tong Zhao, Neil Shah, Mikhail Galkin, Jiliang Tang
Graph Foundation Models (GFMs) are emerging as a significant research topic in the graph domain, aiming to develop graph models trained on extensive and diverse data to enhance their applicability across various tasks and domains.
no code implementations • 3 Feb 2024 • Haitao Mao, Guangliang Liu, Yao Ma, Rongrong Wang, Kristen Johnson, Jiliang Tang
In-Context Learning (ICL) empowers Large Language Models (LLMs) with the capacity to learn in context, achieving downstream generalization without gradient updates but with a few in-context examples.
1 code implementation • 3 Feb 2024 • Jingzhe Liu, Haitao Mao, Zhikai Chen, Tong Zhao, Neil Shah, Jiliang Tang
Yet, the neural scaling laws on graphs, i.e., how the performance of deep graph models changes with model and dataset sizes, have not been systematically investigated, casting doubts on the feasibility of achieving large graph models.
no code implementations • 2 Feb 2024 • Hang Li, Tianlong Xu, Chaoli Zhang, Eason Chen, Jing Liang, Xing Fan, Haoyang Li, Jiliang Tang, Qingsong Wen
The recent surge in generative AI technologies, such as large language models and diffusion models, has boosted the development of AI applications in various domains, including science, finance, and education.
no code implementations • 30 Jan 2024 • Yingqian Cui, Jie Ren, Pengfei He, Jiliang Tang, Yue Xing
We present a theoretical analysis of the performance of transformer with softmax attention in in-context learning with linear regression tasks.
1 code implementation • 10 Jan 2024 • Yue Huang, Lichao Sun, Haoran Wang, Siyuan Wu, Qihui Zhang, Yuan Li, Chujie Gao, Yixin Huang, Wenhan Lyu, Yixuan Zhang, Xiner Li, Zhengliang Liu, Yixin Liu, Yijue Wang, Zhikun Zhang, Bertie Vidgen, Bhavya Kailkhura, Caiming Xiong, Chaowei Xiao, Chunyuan Li, Eric Xing, Furong Huang, Hao Liu, Heng Ji, Hongyi Wang, Huan Zhang, Huaxiu Yao, Manolis Kellis, Marinka Zitnik, Meng Jiang, Mohit Bansal, James Zou, Jian Pei, Jian Liu, Jianfeng Gao, Jiawei Han, Jieyu Zhao, Jiliang Tang, Jindong Wang, Joaquin Vanschoren, John Mitchell, Kai Shu, Kaidi Xu, Kai-Wei Chang, Lifang He, Lifu Huang, Michael Backes, Neil Zhenqiang Gong, Philip S. Yu, Pin-Yu Chen, Quanquan Gu, Ran Xu, Rex Ying, Shuiwang Ji, Suman Jana, Tianlong Chen, Tianming Liu, Tianyi Zhou, William Wang, Xiang Li, Xiangliang Zhang, Xiao Wang, Xing Xie, Xun Chen, Xuyu Wang, Yan Liu, Yanfang Ye, Yinzhi Cao, Yong Chen, Yue Zhao
This paper introduces TrustLLM, a comprehensive study of trustworthiness in LLMs, including principles for different dimensions of trustworthiness, an established benchmark, an evaluation and analysis of trustworthiness for mainstream LLMs, and a discussion of open challenges and future directions.
1 code implementation • 2 Nov 2023 • Harry Shomer, Yao Ma, Juanhui Li, Bo Wu, Charu C. Aggarwal, Jiliang Tang
A new class of methods has been proposed to tackle this problem by aggregating path information.
1 code implementation • 28 Oct 2023 • Xiangyu Zhao, Maolin Wang, Xinjian Zhao, Jiansheng Li, Shucheng Zhou, Dawei Yin, Qing Li, Jiliang Tang, Ruocheng Guo
This survey covers embedding methods like collaborative filtering, self-supervised learning, and graph-based techniques.
1 code implementation • 17 Oct 2023 • Harry Shomer, Yao Ma, Haitao Mao, Juanhui Li, Bo Wu, Jiliang Tang
These methods perform predictions by using the output of an MPNN in conjunction with a "pairwise encoding" that captures the relationship between nodes in the candidate link.
no code implementations • 10 Oct 2023 • Shenglai Zeng, Yaxin Li, Jie Ren, Yiding Liu, Han Xu, Pengfei He, Yue Xing, Shuaiqiang Wang, Jiliang Tang, Dawei Yin
In this work, we conduct the first comprehensive analysis to explore language models' (LMs) memorization during fine-tuning across tasks.
1 code implementation • 7 Oct 2023 • Zhikai Chen, Haitao Mao, Hongzhi Wen, Haoyu Han, Wei Jin, Haiyang Zhang, Hui Liu, Jiliang Tang
In light of these observations, this work introduces a label-free node classification on graphs with LLMs pipeline, LLM-GNN.
1 code implementation • 3 Oct 2023 • Yingqian Cui, Jie Ren, Yuping Lin, Han Xu, Pengfei He, Yue Xing, Lingjuan Lyu, Wenqi Fan, Hui Liu, Jiliang Tang
FT-Shield addresses copyright protection challenges by designing new watermark generation and detection strategies.
1 code implementation • 2 Oct 2023 • Han Xu, Jie Ren, Pengfei He, Shenglai Zeng, Yingqian Cui, Amy Liu, Hui Liu, Jiliang Tang
ChatGPT is one of the most popular language models, achieving strong performance on various natural language tasks.
1 code implementation • 1 Oct 2023 • Haitao Mao, Juanhui Li, Harry Shomer, Bingheng Li, Wenqi Fan, Yao Ma, Tong Zhao, Neil Shah, Jiliang Tang
We recognize three fundamental factors critical to link prediction: local structural proximity, global structural proximity, and feature proximity.
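The three factors can be illustrated with simple proxies (our hypothetical stand-ins: common-neighbor count, shortest-path distance, and feature cosine similarity, on a toy graph):

```python
import math

# Toy graph as adjacency sets, plus hypothetical node features.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
feat = {0: [1.0, 0.0], 1: [1.0, 0.1], 2: [0.0, 1.0], 3: [0.0, 0.9]}

def local_proximity(u, v):
    """Local structural proximity: common-neighbor count."""
    return len(adj[u] & adj[v])

def global_proximity(u, v):
    """Global structural proximity: BFS shortest-path distance."""
    frontier, dist, seen = [u], 0, {u}
    while frontier:
        if v in frontier:
            return dist
        frontier = [w for n in frontier for w in adj[n] if w not in seen]
        seen.update(frontier)
        dist += 1
    return math.inf

def feature_proximity(u, v):
    """Feature proximity: cosine similarity of node features."""
    dot = sum(a * b for a, b in zip(feat[u], feat[v]))
    nu = math.sqrt(sum(a * a for a in feat[u]))
    nv = math.sqrt(sum(b * b for b in feat[v]))
    return dot / (nu * nv)
```

Each proxy captures a different reason a link may form, which is why a single heuristic rarely dominates across datasets.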
1 code implementation • 27 Sep 2023 • Geri Skenderi, Hang Li, Jiliang Tang, Marco Cristani
They aim to learn an energy-based model by predicting the latent representation of a target signal y from the latent representation of a context signal x. JEPAs bypass the need for negative and positive samples traditionally required by contrastive learning, while avoiding the overfitting issues associated with generative pretraining.
Ranked #11 on Graph Classification on REDDIT-B
1 code implementation • NeurIPS 2023 • Wei Jin, Haitao Mao, Zheng Li, Haoming Jiang, Chen Luo, Hongzhi Wen, Haoyu Han, Hanqing Lu, Zhengyang Wang, Ruirui Li, Zhen Li, Monica Xiao Cheng, Rahul Goutam, Haiyang Zhang, Karthik Subbian, Suhang Wang, Yizhou Sun, Jiliang Tang, Bing Yin, Xianfeng Tang
To test the potential of the dataset, we introduce three tasks in this work: (1) next-product recommendation, (2) next-product recommendation with domain shifts, and (3) next-product title generation.
2 code implementations • 7 Jul 2023 • Zhikai Chen, Haitao Mao, Hang Li, Wei Jin, Hongzhi Wen, Xiaochi Wei, Shuaiqiang Wang, Dawei Yin, Wenqi Fan, Hui Liu, Jiliang Tang
The most popular pipeline for learning on graphs with textual node attributes primarily relies on Graph Neural Networks (GNNs), and utilizes shallow text embedding as initial node representations, which has limitations in general knowledge and profound semantic understanding.
no code implementations • 5 Jul 2023 • Zihuai Zhao, Wenqi Fan, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li
As a result, recent studies have attempted to harness the power of LLMs to enhance recommender systems.
1 code implementation • NeurIPS 2023 • Juanhui Li, Harry Shomer, Haitao Mao, Shenglai Zeng, Yao Ma, Neil Shah, Jiliang Tang, Dawei Yin
Furthermore, new and diverse datasets have also been created to better evaluate the effectiveness of these new models.
1 code implementation • 11 Jun 2023 • Jiatong Li, Yunqing Liu, Wenqi Fan, Xiao-Yong Wei, Hui Liu, Jiliang Tang, Qing Li
In this work, we propose a novel LLM-based framework (MolReGPT) for molecule-caption translation, where an In-Context Few-Shot Molecule Learning paradigm is introduced to empower molecule discovery with LLMs like ChatGPT, exploiting their in-context learning capability without domain-specific pre-training and fine-tuning.
Ranked #5 on Text-based de novo Molecule Generation on ChEBI-20
1 code implementation • NeurIPS 2023 • Haitao Mao, Zhikai Chen, Wei Jin, Haoyu Han, Yao Ma, Tong Zhao, Neil Shah, Jiliang Tang
Recent studies on Graph Neural Networks (GNNs) provide both empirical and theoretical evidence supporting their effectiveness in capturing structural patterns on both homophilic and certain heterophilic graphs.
1 code implementation • 25 May 2023 • Yingqian Cui, Jie Ren, Han Xu, Pengfei He, Hui Liu, Lichao Sun, Yue Xing, Jiliang Tang
By detecting the watermark from generated images, copyright infringement can be exposed with evidence.
1 code implementation • 1 Mar 2023 • Wenzhuo Tang, Hongzhi Wen, Renming Liu, Jiayuan Ding, Wei Jin, Yuying Xie, Hui Liu, Jiliang Tang
The recent development of multimodal single-cell technology has made it possible to acquire multiple omics data from individual cells, thereby enabling a deeper understanding of cellular states and dynamics.
1 code implementation • 10 Feb 2023 • Harry Shomer, Wei Jin, Wentao Wang, Jiliang Tang
It aims to predict unseen edges by learning representations for all the entities and relations in a KG.
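As an illustration of scoring triples with learned representations, a TransE-style score is sketched below (TransE is one common choice, not necessarily the paper's own scoring function, and the toy embeddings are hypothetical):

```python
import math

# Illustrative KGC scoring in the TransE style: entities and relations
# live in the same vector space, and a triple (h, r, t) scores well
# when h + r lands close to t.
entity = {"paris": [1.0, 0.0], "france": [1.0, 1.0], "tokyo": [5.0, 5.0]}
relation = {"capital_of": [0.0, 1.0]}

def transe_score(h, r, t):
    """Negative Euclidean distance ||h + r - t||; higher is better."""
    return -math.sqrt(sum((hi + ri - ti) ** 2
                          for hi, ri, ti in zip(entity[h], relation[r], entity[t])))

good = transe_score("paris", "capital_of", "france")
bad = transe_score("paris", "capital_of", "tokyo")
```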
2 code implementations • 8 Feb 2023 • Qidong Liu, Jiaxi Hu, Yutian Xiao, Xiangyu Zhao, Jingtong Gao, Wanyu Wang, Qing Li, Jiliang Tang
In this paper, we provide a comprehensive survey of MRS models, mainly from a technical perspective.
1 code implementation • 6 Feb 2023 • Chengyi Liu, Wenqi Fan, Yunqing Liu, Jiatong Li, Hang Li, Hui Liu, Jiliang Tang, Qing Li
Given the great success of diffusion models in image generation, increasing efforts have been made to leverage these techniques to advance graph generation in recent years.
6 code implementations • 22 Oct 2022 • Dylan Molho, Jiayuan Ding, Zhaoheng Li, Hongzhi Wen, Wenzhuo Tang, Yixin Wang, Julian Venegas, Wei Jin, Renming Liu, Runze Su, Patrick Danaher, Robert Yang, Yu Leo Lei, Yuying Xie, Jiliang Tang
Under each task, we describe the most recent developments in classical and deep learning methods and discuss their advantages and disadvantages.
no code implementations • 19 Oct 2022 • Haitao Mao, Lixin Zou, Yujia Zheng, Jiliang Tang, Xiaokai Chu, Jiashu Zhao, Qian Wang, Dawei Yin
To address the above challenges, we propose a Bias Agnostic whole-page unbiased Learning to rank algorithm, named BAL, to automatically find the user behavior model with causal discovery and mitigate the biases induced by multiple SERP features with no specific design.
1 code implementation • 18 Oct 2022 • Jie Ren, Han Xu, Yuxuan Wan, Xingjun Ma, Lichao Sun, Jiliang Tang
Unlearnable strategies have been introduced to prevent third parties from training on the data without permission.
no code implementations • 18 Oct 2022 • Han Xu, Xiaorui Liu, Yuxuan Wan, Jiliang Tang
We demonstrate that fairly trained classifiers can be greatly vulnerable to such poisoning attacks, with a much worse accuracy-fairness trade-off, even when we apply some of the most effective defenses (originally proposed to defend traditional classification tasks).
no code implementations • 17 Oct 2022 • Han Xu, Pengfei He, Jie Ren, Yuxuan Wan, Zitao Liu, Hui Liu, Jiliang Tang
To tackle this problem, we propose Probabilistic Categorical Adversarial Attack (PCAA), which transforms the discrete optimization problem into a continuous one that can be solved efficiently by Projected Gradient Descent.
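The core of such a continuous relaxation is the projection step PGD needs to keep a relaxed categorical choice on the probability simplex; below is a sketch using the standard sort-based Euclidean projection (our reading of the setup, not the authors' code):

```python
# A discrete categorical choice is replaced by a probability vector;
# after each gradient step, PGD projects the vector back onto the
# simplex {p : p_i >= 0, sum p = 1}.

def project_to_simplex(v):
    """Euclidean projection onto the probability simplex
    (standard sort-based algorithm)."""
    u = sorted(v, reverse=True)
    css = 0.0
    theta = 0.0
    for i, ui in enumerate(u, start=1):
        css += ui
        t = (css - 1.0) / i
        if ui - t > 0:
            theta = t
    return [max(x - theta, 0.0) for x in v]

# An unconstrained gradient update can leave the simplex ...
p = project_to_simplex([0.4, 1.2, -0.3])
# ... projection restores a valid categorical distribution.
```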
no code implementations • 17 Oct 2022 • Yiqi Wang, Chaozhuo Li, Wei Jin, Rui Li, Jianan Zhao, Jiliang Tang, Xing Xie
To bridge this gap, in this work we introduce the first test-time training framework for GNNs to enhance the model generalization capacity for the graph classification task.
1 code implementation • 7 Oct 2022 • Wei Jin, Tong Zhao, Jiayuan Ding, Yozen Liu, Jiliang Tang, Neil Shah
In this work, we provide a data-centric view to tackle these issues and propose a graph transformation framework named GTrans which adapts and refines graph data at test time to achieve better performance.
1 code implementation • 30 Aug 2022 • Harry Shomer, Wei Jin, Juanhui Li, Yao Ma, Jiliang Tang
It motivates us to design a framework that utilizes multiple aggregators to learn representations for hyper-relational facts: one from the perspective of the base triple and the other from the perspective of the qualifiers.
1 code implementation • 7 Jul 2022 • Lixin Zou, Haitao Mao, Xiaokai Chu, Jiliang Tang, Wenwen Ye, Shuaiqiang Wang, Dawei Yin
The unbiased learning to rank (ULTR) problem has been greatly advanced by recent deep learning techniques and well-designed debias algorithms.
2 code implementations • 23 Jun 2022 • Zitao Liu, Qiongqiong Liu, Jiahao Chen, Shuyan Huang, Jiliang Tang, Weiqi Luo
However, the reasons behind the success of deep learning based knowledge tracing (DLKT) approaches remain somewhat unclear, and proper measurement and analysis of these DLKT approaches remain a challenge.
3 code implementations • 15 Jun 2022 • Wei Jin, Xianfeng Tang, Haoming Jiang, Zheng Li, Danqing Zhang, Jiliang Tang, Bing Yin
However, existing approaches have their inherent limitations: (1) they are not directly applicable to graphs where the data is discrete; and (2) the condensation process is computationally expensive due to the involved nested optimization.
1 code implementation • 15 Jun 2022 • Wei Jin, Xiaorui Liu, Yao Ma, Charu Aggarwal, Jiliang Tang
In this paper, we propose a new perspective to look at the performance degradation of deep GNNs, i.e., feature overcorrelation.
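Feature overcorrelation can be quantified, in a minimal sketch, as the mean absolute Pearson correlation between learned feature dimensions (the feature matrices below are hypothetical, not from the paper):

```python
import math

def pearson(a, b):
    """Pearson correlation of two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def mean_abs_dim_corr(feats):
    """Mean |correlation| over all pairs of feature dimensions."""
    cols = list(zip(*feats))
    pairs = [(i, j) for i in range(len(cols)) for j in range(i + 1, len(cols))]
    return sum(abs(pearson(cols[i], cols[j])) for i, j in pairs) / len(pairs)

# Node-by-dimension matrices: independent dimensions versus nearly
# duplicated (overcorrelated) ones, as deeper GNNs tend to produce.
diverse = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]]
overcorrelated = [[1.0, 1.1], [0.0, 0.1], [0.5, 0.6], [0.2, 0.3]]
```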
1 code implementation • 15 Jun 2022 • Jamell Dacon, Harry Shomer, Shaylynn Crum-Dacon, Jiliang Tang
Online discussions, panels, talk page edits, etc., often contain harmful conversational content, i.e., hate speech, death threats and offensive language, especially towards certain demographic groups.
no code implementations • 8 Jun 2022 • Haoyu Han, Xiaorui Liu, Haitao Mao, MohamadAli Torkamani, Feng Shi, Victor Lee, Jiliang Tang
Extensive experiments demonstrate that the proposed method can achieve comparable or better performance with state-of-the-art baselines while it has significantly better computation and memory efficiency.
no code implementations • 1 Jun 2022 • Yuxuan Wan, Han Xu, Xiaorui Liu, Jie Ren, Wenqi Fan, Jiliang Tang
However, federated learning is still at risk of privacy leakage, since attackers can deliberately conduct gradient leakage attacks to reconstruct the client data.
1 code implementation • 21 May 2022 • Juanhui Li, Harry Shomer, Jiayuan Ding, Yiqi Wang, Yao Ma, Neil Shah, Jiliang Tang, Dawei Yin
This suggests a conflation of scoring function design, loss function design, and MP in prior work, with promising insights regarding the scalability of state-of-the-art KGC methods today, as well as careful attention to more suitable MP designs for KGC tasks tomorrow.
no code implementations • 2 May 2022 • Yaxin Li, Xiaorui Liu, Han Xu, Wentao Wang, Jiliang Tang
Deep Neural Networks (DNNs) are vulnerable to adversarial attacks.
no code implementations • 18 Apr 2022 • Enyan Dai, Tianxiang Zhao, Huaisheng Zhu, Junjie Xu, Zhimeng Guo, Hui Liu, Jiliang Tang, Suhang Wang
Despite their great potential in benefiting humans in the real world, recent studies show that GNNs can leak private information, are vulnerable to adversarial attacks, can inherit and magnify societal bias from training data, and lack interpretability, all of which risk causing unintentional harm to users and society.
no code implementations • 3 Apr 2022 • Juanhui Li, Yao Ma, Wei Zeng, Suqi Cheng, Jiliang Tang, Shuaiqiang Wang, Dawei Yin
In other words, GE-BERT can capture both the semantic information and the users' search behavioral information of queries.
1 code implementation • 3 Mar 2022 • Hongzhi Wen, Jiayuan Ding, Wei Jin, Yiqi Wang, Yuying Xie, Jiliang Tang
Recent advances in multimodal single-cell technologies have enabled the simultaneous acquisition of multiple omics data from the same cell, providing deeper insights into cellular states and dynamics.
no code implementations • 14 Dec 2021 • Yiqi Wang, Chaozhuo Li, Zheng Liu, Mingzheng Li, Jiliang Tang, Xing Xie, Lei Chen, Philip S. Yu
Thus, graph pre-training has the great potential to alleviate data sparsity in GNN-based recommendations.
1 code implementation • NeurIPS 2021 • Xiaorui Liu, Jiayuan Ding, Wei Jin, Han Xu, Yao Ma, Zitao Liu, Jiliang Tang
Graph neural networks (GNNs) have shown the power in graph representation learning for numerous tasks.
2 code implementations • ICLR 2022 • Wei Jin, Lingxiao Zhao, Shichang Zhang, Yozen Liu, Jiliang Tang, Neil Shah
Given the prevalence of large-scale graphs in real-world applications, the storage and time for training neural models have raised increasing concerns.
no code implementations • ACL 2022 • Haochen Liu, Joseph Thekinen, Sinem Mollaoglu, Da Tang, Ji Yang, Youlong Cheng, Hui Liu, Jiliang Tang
We conduct experiments on both synthetic and real-world datasets.
no code implementations • 29 Sep 2021 • Wei Jin, Xiaorui Liu, Yao Ma, Charu Aggarwal, Jiliang Tang
In this paper, we observe a new issue in deeper GNNs, i.e., feature overcorrelation, and perform a thorough study to deepen our understanding of this issue.
no code implementations • 20 Sep 2021 • Jamell Dacon, Jiliang Tang
Consequently, our findings highlight that the social activism of Black Lives Matter activists does not diverge from the social issues and topics involving police-brutality-related and racially motivated killings of Black individuals: in the topical graph, the topics and conversations encircling the largest component relate directly to the topic of Black Lives Matter.
1 code implementation • 12 Aug 2021 • Wenqi Fan, Xiaorui Liu, Wei Jin, Xiangyu Zhao, Jiliang Tang, Qing Li
The key of recommender systems is to predict how likely users will interact with items based on their historical online behaviors, e.g., clicks, add-to-cart, purchases, etc.
no code implementations • 10 Aug 2021 • Yiqi Wang, Chaozhuo Li, Mingzheng Li, Wei Jin, Yuming Liu, Hao Sun, Xing Xie, Jiliang Tang
These methods often make recommendations based on the learned user and item embeddings.
no code implementations • 10 Aug 2021 • Yao Li, Xiaorui Liu, Jiliang Tang, Ming Yan, Kun Yuan
Decentralized optimization and communication compression have exhibited their great potential in accelerating distributed machine learning by mitigating the communication bottleneck in practice.
no code implementations • 7 Aug 2021 • Wenqi Fan, Wei Jin, Xiaorui Liu, Han Xu, Xianfeng Tang, Suhang Wang, Qing Li, Jiliang Tang, JianPing Wang, Charu Aggarwal
Despite the great success, recent studies have shown that GNNs are highly vulnerable to adversarial attacks, where adversaries can mislead the GNNs' prediction by modifying graphs.
no code implementations • 28 Jul 2021 • Wentao Wang, Han Xu, Xiaorui Liu, Yaxin Li, Bhavani Thuraisingham, Jiliang Tang
Adversarial training has been empirically proven to be one of the most effective and reliable defense methods against adversarial attacks.
1 code implementation • 15 Jul 2021 • Qiongqiong Liu, Tianqiao Liu, Jiafu Zhao, Qiang Fang, Wenbiao Ding, Zhongqin Wu, Feng Xia, Jiliang Tang, Zitao Liu
Sentence completion (SC) questions present a sentence with one or more blanks to be filled in, along with three to five candidate words or phrases as options.
1 code implementation • 15 Jul 2021 • Yang Hao, Hang Li, Wenbiao Ding, Zhongqin Wu, Jiliang Tang, Rose Luckin, Zitao Liu
In this work, we study computational approaches to detect online dialogic instructions, which are widely used to help students understand learning materials, and build effective study habits.
no code implementations • 12 Jul 2021 • Haochen Liu, Yiqi Wang, Wenqi Fan, Xiaorui Liu, Yaxin Li, Shaili Jain, Yunhao Liu, Anil K. Jain, Jiliang Tang
In the past few decades, artificial intelligence (AI) technology has experienced swift developments, changing everyone's daily life and profoundly altering the course of human society.
1 code implementation • 5 Jul 2021 • Xiaorui Liu, Wei Jin, Yao Ma, Yaxin Li, Hua Liu, Yiqi Wang, Ming Yan, Jiliang Tang
While many existing graph neural networks (GNNs) have been proven to perform $\ell_2$-based graph smoothing that enforces smoothness globally, in this work we aim to further enhance the local smoothness adaptivity of GNNs via $\ell_1$-based graph smoothing.
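The contrast can be made concrete with a toy sketch (illustrative only, under assumed penalty weights and a hypothetical `smooth` helper, not the paper's algorithm): on a signal with a sharp jump, an $\ell_2$ edge penalty blurs the jump across the graph, while an $\ell_1$ penalty adapts locally and better preserves the discontinuity.

```python
# Toy sketch: graph smoothing with an l2 vs. an l1 edge penalty on a path
# graph carrying a piecewise-constant signal with one sharp jump.
def smooth(x, edges, penalty, lam=1.0, lr=0.02, steps=5000):
    """(Sub)gradient descent on sum_i (f_i - x_i)^2 + lam * sum_edges pen(f_i - f_j)."""
    f = list(x)
    for _ in range(steps):
        grad = [2.0 * (f[i] - x[i]) for i in range(len(f))]  # fidelity term
        for i, j in edges:
            d = f[i] - f[j]
            # l2 penalty: d^2 (smooth gradient); l1 penalty: |d| (subgradient)
            g = 2.0 * lam * d if penalty == "l2" else lam * (d > 0) - lam * (d < 0)
            grad[i] += g
            grad[j] -= g
        f = [f[i] - lr * grad[i] for i in range(len(f))]
    return f

x = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]      # sharp jump between nodes 2 and 3
edges = [(i, i + 1) for i in range(5)]
f_l2 = smooth(x, edges, "l2")           # spreads the jump across many nodes
f_l1 = smooth(x, edges, "l1")           # keeps the discontinuity sharper
```

The $\ell_1$ result retains a visibly larger jump between nodes 2 and 3 than the $\ell_2$ result, which is the local-adaptivity property the abstract alludes to.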
no code implementations • 12 Jun 2021 • Xiangyu Zhao, Haochen Liu, Wenqi Fan, Hui Liu, Jiliang Tang, Chong Wang
Unlike existing algorithms, the proposed controller can adaptively generate the loss probabilities for different data examples according to their varied convergence behaviors.
no code implementations • ICLR 2022 • Yao Ma, Xiaorui Liu, Neil Shah, Jiliang Tang
We find that this claim is not quite true, and in fact, GCNs can achieve strong performance on heterophilous graphs under certain conditions.
1 code implementation • ICLR 2022 • Wei Jin, Xiaorui Liu, Xiangyu Zhao, Yao Ma, Neil Shah, Jiliang Tang
Then we propose the AutoSSL framework which can automatically search over combinations of various self-supervised tasks.
no code implementations • 9 Jun 2021 • Han Xu, Xiaorui Liu, Wentao Wang, Wenbiao Ding, Zhongqin Wu, Zitao Liu, Anil Jain, Jiliang Tang
In this work, we study the effect of memorization in adversarially trained DNNs and disclose two important findings: (a) memorizing atypical samples is only effective in improving a DNN's accuracy on clean atypical samples but hardly improves its adversarial robustness, and (b) memorizing certain atypical samples can even hurt the DNN's performance on typical samples.
no code implementations • 10 May 2021 • Wei Jin, Xiaorui Liu, Yao Ma, Tyler Derr, Charu Aggarwal, Jiliang Tang
Graph neural networks (GNNs) have received tremendous attention due to their power in learning effective representations for graphs.
no code implementations • Findings (ACL) 2021 • Haochen Liu, Wei Jin, Hamid Karimi, Zitao Liu, Jiliang Tang
The results show that the text classification models trained under our proposed framework outperform traditional models significantly in terms of fairness, and also slightly in terms of classification performance.
1 code implementation • 19 Nov 2020 • Wei Jin, Tyler Derr, Yiqi Wang, Yao Ma, Zitao Liu, Jiliang Tang
Specifically, to balance information from graph structure and node features, we propose a feature similarity preserving aggregation which adaptively integrates graph structure and node features.
no code implementations • COLING 2020 • Haochen Liu, Zitao Liu, Zhongqin Wu, Jiliang Tang
The automatic evaluation for school assignments is an important application of AI in the education field.
2 code implementations • 13 Oct 2020 • Han Xu, Xiaorui Liu, Yaxin Li, Anil K. Jain, Jiliang Tang
However, we find that adversarial training algorithms tend to introduce severe disparity of accuracy and robustness between different groups of data.
1 code implementation • 5 Oct 2020 • Yao Ma, Xiaorui Liu, Tong Zhao, Yozen Liu, Jiliang Tang, Neil Shah
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models including GCN, GAT, PPNP, and APPNP can be regarded as (approximately) solving a graph denoising problem with a smoothness assumption.
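A minimal numerical illustration of this view (a sketch under assumed step sizes, not the paper's derivation): one gradient step on the denoising objective min_F ||F − X||² + λ·Σ_(i,j)∈E (F_i − F_j)², started at F = X, reduces to exactly the kind of neighborhood-averaging update that GNN aggregation layers perform.

```python
# One gradient step on  min_F sum_i (F_i - X_i)^2 + lam * sum_edges (F_i - F_j)^2,
# started at F = X, so the fidelity gradient vanishes and only the smoothness
# term moves each node toward its neighbors. Illustrative sketch only.
def denoise_step(x, neighbors, lam=0.5, lr=0.25):
    return [
        x[i] - lr * 2.0 * lam * sum(x[i] - x[j] for j in neighbors[i])
        for i in range(len(x))
    ]

# Path graph 0-1-2 with scalar features: endpoints move toward the middle,
# the middle node (balanced between its neighbors) stays put.
x = [0.0, 1.0, 2.0]
neighbors = {0: [1], 1: [0, 2], 2: [1]}
smoothed = denoise_step(x, neighbors)   # [0.25, 1.0, 1.75]
```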
1 code implementation • EMNLP 2020 • Haochen Liu, Wentao Wang, Yiqi Wang, Hui Liu, Zitao Liu, Jiliang Tang
Extensive experiments on two real-world conversation datasets show that our framework significantly reduces gender bias in dialogue models while maintaining the response quality.
1 code implementation • 23 Sep 2020 • Wentao Wang, Guowei Xu, Wenbiao Ding, Gale Yan Huang, Guoliang Li, Jiliang Tang, Zitao Liu
Extensive experiments conducted on three real-world data sets demonstrate the superiority of our framework on learning representations from limited data with crowdsourced labels, comparing with various state-of-the-art baselines.
no code implementations • 2 Sep 2020 • Han Xu, Ya-Xin Li, Xiaorui Liu, Hui Liu, Jiliang Tang
Thus, in this paper, we perform the initial study about adversarial attacks on meta learning under the few-shot classification problem.
1 code implementation • 31 Aug 2020 • Zhiwei Wang, Zhengzhang Chen, Jingchao Ni, Hui Liu, Haifeng Chen, Jiliang Tang
To address these challenges, in this paper, we propose OC4Seq, a multi-scale one-class recurrent neural network for detecting anomalies in discrete event sequences.
no code implementations • ICLR 2021 • Xiaorui Liu, Yao Li, Rongrong Wang, Jiliang Tang, Ming Yan
Communication compression has become a key strategy to speed up distributed optimization.
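To ground the idea, here is a generic sketch of one widely used compressor, top-k gradient sparsification with error feedback; this is illustrative of communication compression in general, not the specific scheme analyzed in the paper, and the helper names are hypothetical.

```python
# Top-k sparsification with error feedback: only k coordinates of the
# (error-corrected) gradient are communicated; the dropped mass is carried
# over to the next round so no information is permanently lost.
def topk(vec, k):
    keep = set(sorted(range(len(vec)), key=lambda i: abs(vec[i]), reverse=True)[:k])
    return [v if i in keep else 0.0 for i, v in enumerate(vec)]

def compress_with_error_feedback(grad, residual, k):
    corrected = [g + r for g, r in zip(grad, residual)]  # re-add past compression error
    sent = topk(corrected, k)                            # what actually gets communicated
    new_residual = [c - s for c, s in zip(corrected, sent)]
    return sent, new_residual

sent, res = compress_with_error_feedback([0.9, -0.1, 0.4], [0.0, 0.0, 0.0], k=1)
# sent = [0.9, 0.0, 0.0]; the dropped coordinates live on in the residual
```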
no code implementations • 28 Jun 2020 • Xianfeng Tang, Huaxiu Yao, Yiwei Sun, Yiqi Wang, Jiliang Tang, Charu Aggarwal, Prasenjit Mitra, Suhang Wang
Pseudo labels increase the chance of connecting to labeled neighbors for low-degree nodes, thus reducing the biases of GCNs from the data perspective.
no code implementations • 26 Jun 2020 • Xiangyu Zhao, Haochen Liu, Hui Liu, Jiliang Tang, Weiwei Guo, Jun Shi, Sida Wang, Huiji Gao, Bo Long
Specifically, we first propose an end-to-end differentiable framework that can calculate the weights over various dimensions for feature fields in a soft and continuous manner with an AutoML-based optimization algorithm; we then derive a hard and discrete embedding component architecture according to the maximal weights and retrain the whole recommender framework.
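The soft-then-hard selection step can be caricatured in a few lines (a toy sketch with made-up field names and candidate dimensions; the actual framework learns the weights jointly with the recommender): continuous weights over candidate embedding dimensions are derived per field, and the maximal weight determines the discrete choice.

```python
import math

# Stage 1: soft, continuous weights over candidate embedding dimensions
# (here just a softmax over per-field logits; illustrative values only).
def soft_weights(logits):
    exps = [math.exp(v) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

candidate_dims = [2, 8, 32]
field_logits = {"user_id": [0.1, 2.0, 0.3], "item_id": [1.5, 0.2, 0.1]}

# Stage 2: derive a hard, discrete dimension per field from the maximal weight.
hard_choice = {}
for field, logits in field_logits.items():
    w = soft_weights(logits)
    hard_choice[field] = candidate_dims[w.index(max(w))]
# hard_choice -> {"user_id": 8, "item_id": 2}
```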
1 code implementation • 17 Jun 2020 • Wei Jin, Tyler Derr, Haochen Liu, Yiqi Wang, Suhang Wang, Zitao Liu, Jiliang Tang
Thus, we seek to harness SSL for GNNs to fully exploit the unlabeled data.
1 code implementation • 3 Jun 2020 • Hao Yuan, Jiliang Tang, Xia Hu, Shuiwang Ji
Furthermore, our experimental results indicate that the generated graphs can provide guidance on how to improve the trained GNNs.
no code implementations • 27 May 2020 • Haochen Liu, Zhiwei Wang, Tyler Derr, Jiliang Tang
Recently, neural network based dialogue systems have become ubiquitous in our increasingly digitalized society.
no code implementations • 22 May 2020 • Yiqi Wang, Yao Ma, Wei Jin, Chaozhuo Li, Charu Aggarwal, Jiliang Tang
Therefore, in this paper, we aim to develop customized graph neural networks for graph classification.
3 code implementations • 20 May 2020 • Wei Jin, Yao Ma, Xiaorui Liu, Xianfeng Tang, Suhang Wang, Jiliang Tang
A natural idea to defend adversarial attacks is to clean the perturbed graph.
1 code implementation • 17 May 2020 • Wenqi Fan, Tyler Derr, Xiangyu Zhao, Yao Ma, Hui Liu, Jian-Ping Wang, Jiliang Tang, Qing Li
In this work, we present our framework CopyAttack, which is a reinforcement learning based black-box attack method that harnesses real users from a source domain by copying their profiles into the target domain with the goal of promoting a subset of items.
no code implementations • 16 May 2020 • Gale Yan Huang, Jiahao Chen, Haochen Liu, Weiping Fu, Wenbiao Ding, Jiliang Tang, Songfan Yang, Guoliang Li, Zitao Liu
Asking questions is one of the most crucial pedagogical techniques used by teachers in class.
no code implementations • 15 May 2020 • Hang Li, Zhiwei Wang, Jiliang Tang, Wenbiao Ding, Zitao Liu
Classroom activity detection (CAD) aims at accurately recognizing speaker roles (either teacher or student) in classrooms.
3 code implementations • 13 May 2020 • Ya-Xin Li, Wei Jin, Han Xu, Jiliang Tang
DeepRobust is a PyTorch adversarial learning library which aims to build a comprehensive and easy-to-use platform to foster this research field.
3 code implementations • 2 Mar 2020 • Wei Jin, Ya-Xin Li, Han Xu, Yiqi Wang, Shuiwang Ji, Charu Aggarwal, Jiliang Tang
As the extensions of DNNs to graphs, Graph Neural Networks (GNNs) have been demonstrated to inherit this vulnerability.
no code implementations • 28 Feb 2020 • Xiangyu Zhao, Xudong Zheng, Xiwang Yang, Xiaobing Liu, Jiliang Tang
Online recommendation and advertising are two major income channels for online recommendation platforms (e.g., e-commerce and news feed sites).
no code implementations • 26 Feb 2020 • Xiangyu Zhao, Chong Wang, Ming Chen, Xudong Zheng, Xiaobing Liu, Jiliang Tang
Deep learning based recommender systems (DLRSs) often have embedding layers, which are utilized to lessen the dimensionality of categorical variables (e.g., user/item identifiers) and meaningfully transform them into a low-dimensional space.
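A minimal sketch of such an embedding layer (hypothetical class and sizes; in a real DLRS the table is a trainable parameter learned jointly with the model): a lookup table maps high-cardinality categorical IDs to dense low-dimensional vectors.

```python
import random

# Embedding layer as a lookup table: each of num_ids categorical values
# owns one dense dim-dimensional row. Randomly initialized here purely
# for illustration; training would update these rows by gradient descent.
class Embedding:
    def __init__(self, num_ids, dim, seed=0):
        rng = random.Random(seed)
        self.table = [[rng.uniform(-0.1, 0.1) for _ in range(dim)]
                      for _ in range(num_ids)]

    def lookup(self, ids):
        return [self.table[i] for i in ids]

user_emb = Embedding(num_ids=1000, dim=8)   # 1000 user IDs -> 8-dim vectors
vectors = user_emb.lookup([3, 42])          # two dense rows of the table
```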
no code implementations • 4 Jan 2020 • Ghazaleh Beigi, Jiliang Tang, Huan Liu
The existence of negative links has piqued research interest in whether the properties and principles of signed networks differ from those of unsigned networks, and it mandates dedicated efforts on link analysis for signed social networks.
no code implementations • 27 Dec 2019 • Teng Guo, Feng Xia, Shihao Zhen, Xiaomei Bai, Dongyu Zhang, Zitao Liu, Jiliang Tang
Failing to land a job could have serious social consequences for college students, such as drunkenness and suicide.
1 code implementation • 24 Dec 2019 • Hamid Karimi, Tyler Derr, Jiliang Tang
In this regard, one crucial aspect of deep neural network classifiers that can help us deepen our knowledge about their decision-making behavior is to investigate their decision boundaries.
2 code implementations • 22 Nov 2019 • Zhiwei Wang, Hui Liu, Jiliang Tang, Songfan Yang, Gale Yan Huang, Zitao Liu
Robust language processing systems are becoming increasingly important given the recent awareness of dangerous situations where brittle machine learning models can be easily broken in the presence of noise.
no code implementations • 16 Oct 2019 • Xiaorui Liu, Yao Li, Jiliang Tang, Ming Yan
Large-scale machine learning models are often trained by parallel stochastic gradient descent algorithms.
1 code implementation • COLING 2020 • Haochen Liu, Jamell Dacon, Wenqi Fan, Hui Liu, Zitao Liu, Jiliang Tang
In particular, we construct a benchmark dataset and propose quantitative measures to understand fairness in dialogue models.
no code implementations • 23 Sep 2019 • Tiaoqiao Liu, Wenbiao Ding, Zhiwei Wang, Jiliang Tang, Gale Yan Huang, Zitao Liu
Automatic short answer grading (ASAG), which autonomously scores student answers against reference answers, provides teaching professionals with a cost-effective and consistent grading approach and can reduce their monotonous and tedious grading workloads.
3 code implementations • 17 Sep 2019 • Han Xu, Yao Ma, Haochen Liu, Debayan Deb, Hui Liu, Jiliang Tang, Anil K. Jain
In this survey, we review the state-of-the-art algorithms for generating adversarial examples and the countermeasures against adversarial examples, for the three popular data types, i.e., images, graphs, and text.
no code implementations • 13 Sep 2019 • Haochen Liu, Tyler Derr, Zitao Liu, Jiliang Tang
Neural dialogue models have been widely adopted in various chatbot applications because of their good performance in simulating and generalizing human conversations.
no code implementations • 9 Sep 2019 • Xiangyu Zhao, Changsheng Gu, Haoshenglun Zhang, Xiwang Yang, Xiaobing Liu, Jiliang Tang, Hui Liu
However, most RL-based advertising algorithms focus on optimizing ads' revenue while ignoring the possible negative influence of ads on the user experience with recommended items (products, articles, and videos).
no code implementations • 1 Sep 2019 • Zhiwei Wang, Xiaoqin Feng, Jiliang Tang, Gale Yan Huang, Zitao Liu
Monitoring student knowledge states or skill acquisition levels, known as knowledge tracing, is a fundamental part of intelligent tutoring systems.