no code implementations • 21 May 2024 • Yicheng Wang, Mark Cusick, Mohamed Laila, Kate Puech, Zhengping Ji, Xia Hu, Michael Wilson, Noah Spitzer-Williams, Bryan Wheeler, Yasser Ibrahim
Automatic speech recognition (ASR) techniques have become powerful tools, enhancing efficiency in law enforcement scenarios.
no code implementations • 29 Feb 2024 • Hongyi Liu, Zirui Liu, Ruixiang Tang, Jiayi Yuan, Shaochen Zhong, Yu-Neng Chuang, Li Li, Rui Chen, Xia Hu
Our aim is to raise awareness of the potential risks under the emerging share-and-play scenario, so as to proactively prevent potential consequences caused by LoRA-as-an-Attack.
no code implementations • 28 Feb 2024 • Yu-Neng Chuang, Tianwei Xing, Chia-Yuan Chang, Zirui Liu, Xun Chen, Xia Hu
In this work, we propose a Natural Language Prompt Encapsulation (Nano-Capsulator) framework that compresses original prompts into a natural-language-formatted Capsule Prompt while maintaining prompt utility and transferability.
no code implementations • 7 Feb 2024 • Yu-Neng Chuang, Guanchu Wang, Chia-Yuan Chang, Ruixiang Tang, Fan Yang, Mengnan Du, Xuanting Cai, Xia Hu
In this work, we introduce a generative explanation framework, xLLM, to improve the faithfulness of the explanations provided in natural language formats for LLMs.
1 code implementation • 5 Feb 2024 • Zirui Liu, Jiayi Yuan, Hongye Jin, Shaochen Zhong, Zhaozhuo Xu, Vladimir Braverman, Beidi Chen, Xia Hu
This memory demand increases with larger batch sizes and longer context lengths.
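The scale of this KV-cache memory demand can be seen with a back-of-the-envelope estimate (the function and the model configuration below are illustrative assumptions, not from the paper):

```python
def kv_cache_bytes(batch, seq_len, n_layers, n_heads, head_dim, bytes_per_elem=2):
    # Two cached tensors per layer (K and V), each of shape
    # [batch, n_heads, seq_len, head_dim], stored at bytes_per_elem (fp16 = 2).
    return 2 * n_layers * batch * n_heads * seq_len * head_dim * bytes_per_elem

# e.g., a hypothetical 7B-scale configuration (32 layers, 32 heads,
# head_dim 128) serving batch 8 at a 4096-token context, in fp16:
gb = kv_cache_bytes(batch=8, seq_len=4096, n_layers=32, n_heads=32, head_dim=128) / 2**30
```

Doubling either the batch size or the context length doubles this figure, which is why the cache, not the weights, often dominates serving memory.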
no code implementations • 3 Feb 2024 • Aokun Chen, Qian Li, Yu Huang, Yongqiu Li, Yu-Neng Chuang, Xia Hu, Serena Guo, Yonghui Wu, Yi Guo, Jiang Bian
We constructed an interactive knowledge map to disseminate our study results.
no code implementations • 8 Jan 2024 • Zirui Liu, Qingquan Song, Qiang Charles Xiao, Sathiya Keerthi Selvaraj, Rahul Mazumder, Aman Gupta, Xia Hu
This usually results in a trade-off between model accuracy and efficiency.
2 code implementations • 2 Jan 2024 • Hongye Jin, Xiaotian Han, Jingfeng Yang, Zhimeng Jiang, Zirui Liu, Chia-Yuan Chang, Huiyuan Chen, Xia Hu
To achieve this goal, we propose SelfExtend to extend the context window of LLMs by constructing bi-level attention information: the grouped attention and the neighbor attention.
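The bi-level split can be sketched as a relative-position mapping: exact distances inside a neighbor window, floor-divided group positions beyond it (a minimal sketch of the idea; the `window` and `group` parameter names and the boundary shift are our formulation, not necessarily the paper's exact one):

```python
def self_extend_rel_pos(q_pos, k_pos, window, group):
    """Bi-level relative position: neighbor attention uses the exact
    distance; grouped attention compresses distant positions by integer
    division, shifted so the two regimes meet at the window boundary."""
    rel = q_pos - k_pos
    if rel <= window:  # neighbor attention: exact relative position
        return rel
    # grouped attention: coarse position, continuous at the boundary
    return q_pos // group - k_pos // group + window - window // group

near = self_extend_rel_pos(10, 8, window=4, group=2)   # inside window
far = self_extend_rel_pos(20, 2, window=4, group=2)    # outside window
```

Because distant positions are reused after division, the model never sees a relative position larger than those encountered during pretraining, without any fine-tuning.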
no code implementations • 29 Dec 2023 • Huiyuan Chen, Vivian Lai, Hongye Jin, Zhimeng Jiang, Mahashweta Das, Xia Hu
Here we propose a non-contrastive learning objective, named nCL, which explicitly mitigates dimensional collapse of representations in collaborative filtering.
no code implementations • 23 Dec 2023 • Guanchu Wang, Yu-Neng Chuang, Fan Yang, Mengnan Du, Chia-Yuan Chang, Shaochen Zhong, Zirui Liu, Zhaozhuo Xu, Kaixiong Zhou, Xuanting Cai, Xia Hu
To address this problem, we develop a pre-trained, DNN-based, generic explainer on large-scale image datasets, and leverage its transferability to explain various vision models for downstream tasks.
1 code implementation • 19 Dec 2023 • Zhimeng Jiang, Xiaotian Han, Chao Fan, Zirui Liu, Na Zou, Ali Mostafavi, Xia Hu
To this end, we aim to achieve fairness via a new GNN architecture.
no code implementations • 1 Dec 2023 • Jialin Wu, Xia Hu, Yaqing Wang, Bo Pang, Radu Soricut
Large multi-modal models (LMMs) exhibit remarkable performance across numerous tasks.
Ranked #1 on Visual Question Answering (VQA) on A-OKVQA (using extra training data)
1 code implementation • 19 Nov 2023 • Yuting Sun, Guansong Pang, Guanhua Ye, Tong Chen, Xia Hu, Hongzhi Yin
The ongoing challenges in time series anomaly detection (TSAD), notably the scarcity of anomaly labels and the variability in anomaly lengths and shapes, have led to the need for a more efficient solution.
no code implementations • 20 Oct 2023 • Ruixiang Tang, Gord Lueck, Rodolfo Quispe, Huseyin A Inan, Janardhan Kulkarni, Xia Hu
Large language models have revolutionized the field of NLP by achieving state-of-the-art performance on various tasks.
no code implementations • 1 Oct 2023 • Hongye Jin, Xiaotian Han, Jingfeng Yang, Zhimeng Jiang, Chia-Yuan Chang, Xia Hu
Our method progressively increases the training length throughout the pretraining phase, thereby mitigating computational costs and enhancing efficiency.
no code implementations • 29 Sep 2023 • Xiaotian Han, Hanqing Zeng, Yu Chen, Shaoliang Nie, Jingzhou Liu, Kanika Narang, Zahra Shakeri, Karthik Abinav Sankararaman, Song Jiang, Madian Khabsa, Qifan Wang, Xia Hu
We establish this equivalence mathematically by demonstrating that graph convolution networks (GCN) and simplified graph convolution (SGC) can be expressed as a form of Mixup.
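The claimed equivalence can be illustrated numerically: row-normalized neighbor aggregation is a convex combination of node features, which is structurally the same operation as Mixup, $x_{mix} = \sum_i \lambda_i x_i$ with $\sum_i \lambda_i = 1$ (the toy graph and features below are ours, purely illustrative):

```python
import numpy as np

# Toy 3-node graph: node 0 is connected to nodes 1 and 2.
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [2.0, 2.0]])

A_hat = A / A.sum(axis=1, keepdims=True)  # row-normalize: mixup weights
H = A_hat @ X                             # one aggregation step (SGC-style)

# Node 0's aggregated feature is exactly the uniform Mixup of its
# neighbors' features with lambda = (0, 0.5, 0.5).
lam = np.array([0.0, 0.5, 0.5])
```

Stacking such steps (and adding a nonlinearity for GCN) does not change the underlying mixing structure of each aggregation.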
1 code implementation • 20 Sep 2023 • Xin Zheng, Yixin Liu, Zhifeng Bao, Meng Fang, Xia Hu, Alan Wee-Chung Liew, Shirui Pan
Data-centric AI, with its primary focus on the collection, management, and utilization of data to drive AI models and applications, has attracted increasing attention in recent years.
1 code implementation • 4 Sep 2023 • Yu-Neng Chuang, Guanchu Wang, Chia-Yuan Chang, Kwei-Herng Lai, Daochen Zha, Ruixiang Tang, Fan Yang, Alfredo Costilla Reyes, Kaixiong Zhou, Xiaoqian Jiang, Xia Hu
The exponential growth in scholarly publications necessitates advanced tools for efficient article retrieval, especially in interdisciplinary fields where diverse terminologies are used to describe similar research.
no code implementations • 2 Sep 2023 • Huiyuan Chen, Kaixiong Zhou, Kwei-Herng Lai, Chin-Chia Michael Yeh, Yan Zheng, Xia Hu, Hao Yang
To address the gradient mismatch problem in STE, we further consider the quantized errors and their second-order derivatives for better stability.
no code implementations • 28 Aug 2023 • Kwei-Herng Lai, Daochen Zha, Huiyuan Chen, Mangesh Bendre, Yuzhong Chen, Mahashweta Das, Hao Yang, Xia Hu
Imbalanced datasets are commonly observed in various real-world applications, presenting significant challenges in training classifiers.
no code implementations • 4 Aug 2023 • Qizhang Feng, Jiayi Yuan, Forhan Bin Emdad, Karim Hanna, Xia Hu, Zhe He
Stroke is a significant cause of mortality and morbidity, necessitating early predictive strategies to minimize risks.
1 code implementation • 22 Jul 2023 • Qiaoyu Tan, Xin Zhang, Xiao Huang, Hao Chen, Jundong Li, Xia Hu
Graph neural networks (GNNs) have shown prominent performance on attributed network embedding.
no code implementations • 9 Jul 2023 • Chia-Yuan Chang, Yu-Neng Chuang, Kwei-Herng Lai, Xiaotian Han, Xia Hu, Na Zou
These studies face challenges: either inaccurate prediction of sensitive attributes, or the need to mitigate the unequal distribution of manually defined non-sensitive attributes related to bias.
1 code implementation • 15 Jun 2023 • Xiaotian Han, Jianfeng Chi, Yu Chen, Qifan Wang, Han Zhao, Na Zou, Xia Hu
This paper introduces the Fair Fairness Benchmark (FFB), a benchmarking framework for in-processing group fairness methods.
no code implementations • 9 Jun 2023 • Yao Rong, Guanchu Wang, Qizhang Feng, Ninghao Liu, Zirui Liu, Enkelejda Kasneci, Xia Hu
A strategy of subgraph sampling is designed in LARA to improve the scalability of the training process.
1 code implementation • NeurIPS 2023 • Zirui Liu, Guanchu Wang, Shaochen Zhong, Zhaozhuo Xu, Daochen Zha, Ruixiang Tang, Zhimeng Jiang, Kaixiong Zhou, Vipin Chaudhary, Shuai Xu, Xia Hu
While the model parameters do contribute to memory usage, the primary memory bottleneck during training arises from storing feature maps, also known as activations, as they are crucial for gradient calculation.
no code implementations • 24 May 2023 • Zirui Liu, Zhimeng Jiang, Shaochen Zhong, Kaixiong Zhou, Li Li, Rui Chen, Soo-Hyun Choi, Xia Hu
However, model editing for graph neural networks (GNNs) is rarely explored, despite GNNs' widespread applicability.
1 code implementation • ICLR 2022 • Qizhang Feng, Ninghao Liu, Fan Yang, Ruixiang Tang, Mengnan Du, Xia Hu
Graph Neural Networks (GNNs) are gaining extensive attention for their application in graph data.
no code implementations • 21 May 2023 • Yue Xu, Hao Chen, Zefan Wang, Jianwen Yin, Qijie Shen, Dimin Wang, Feiran Huang, Lixiang Lai, Tao Zhuang, Junfeng Ge, Xia Hu
Feed recommendation systems, which recommend a sequence of items for users to browse and interact with, have gained significant popularity in practical applications.
no code implementations • 17 May 2023 • Zhaozhuo Xu, Zirui Liu, Beidi Chen, Yuxin Tang, Jue Wang, Kaixiong Zhou, Xia Hu, Anshumali Shrivastava
Thus, optimizing this accuracy-efficiency trade-off is crucial for the LLM deployment on commodity hardware.
1 code implementation • 3 May 2023 • Daochen Zha, Louis Feng, Liang Luo, Bhargav Bhushanam, Zirui Liu, Yusuo Hu, Jade Nie, Yuzhen Huang, Yuandong Tian, Arun Kejariwal, Xia Hu
In this work, we explore a "pre-train, and search" paradigm for efficient sharding.
1 code implementation • 26 Apr 2023 • Jingfeng Yang, Hongye Jin, Ruixiang Tang, Xiaotian Han, Qizhang Feng, Haoming Jiang, Bing Yin, Xia Hu
This paper presents a comprehensive and practical guide for practitioners and end-users working with Large Language Models (LLMs) in their downstream natural language processing (NLP) tasks.
no code implementations • 21 Apr 2023 • Guanchu Wang, Ninghao Liu, Daochen Zha, Xia Hu
Anomaly detection, where data instances are discovered containing feature patterns different from the majority, plays a fundamental role in various applications.
no code implementations • 15 Apr 2023 • Kwei-Herng Lai, Lan Wang, Huiyuan Chen, Kaixiong Zhou, Fei Wang, Hao Yang, Xia Hu
We formulate context sampling as a Markov decision process and exploit deep reinforcement learning to optimize the time series domain adaptation process via context sampling, designing a tailored reward function to generate domain-invariant features that better align the two domains for anomaly detection.
no code implementations • 30 Mar 2023 • Sirui Ding, Qiaoyu Tan, Chia-Yuan Chang, Na Zou, Kai Zhang, Nathan R. Hoot, Xiaoqian Jiang, Xia Hu
Organ transplant is the essential treatment method for some end-stage diseases, such as liver failure.
no code implementations • 24 Mar 2023 • Jiayi Yuan, Ruixiang Tang, Xiaoqian Jiang, Xia Hu
The process of matching patients with suitable clinical trials is essential for advancing medical research and providing optimal care.
no code implementations • 24 Mar 2023 • Chia-Yuan Chang, Jiayi Yuan, Sirui Ding, Qiaoyu Tan, Kai Zhang, Xiaoqian Jiang, Xia Hu, Na Zou
To tackle these challenges, deep learning frameworks have been created to match patients to trials.
no code implementations • 23 Mar 2023 • Yu-Neng Chuang, Ruixiang Tang, Xiaoqian Jiang, Xia Hu
Electronic health records (EHRs) store an extensive array of patient information, encompassing medical histories, diagnoses, treatments, and test outcomes.
1 code implementation • 20 Mar 2023 • Ruixiang Tang, Qizhang Feng, Ninghao Liu, Fan Yang, Xia Hu
To overcome this challenge, we introduce a clean-label backdoor watermarking framework that uses imperceptible perturbations to replace mislabeled samples.
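The clean-label idea can be sketched in miniature: rather than mislabeling samples, a small bounded perturbation (a "trigger") is added to a few correctly labeled samples, and a model trained on the marked data learns to respond to the trigger (the shapes, trigger pattern, and budget below are illustrative assumptions, not the paper's exact scheme):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_trigger(x, eps=0.03):
    """Add an imperceptible corner-patch trigger, bounded by eps, to an
    image in [0, 1]. Labels are left untouched (the 'clean-label' part)."""
    trigger = np.zeros_like(x)
    trigger[-2:, -2:] = 1.0                  # tiny corner patch
    return np.clip(x + eps * trigger, 0.0, 1.0)

x = rng.random((8, 8))                       # stand-in grayscale image
x_marked = add_trigger(x)
perturbation = np.abs(x_marked - x).max()    # stays within the eps budget
```

At verification time, the dataset owner queries a suspect model with triggered inputs: a model trained on the watermarked data behaves distinguishably on them.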
no code implementations • 19 Mar 2023 • Shenghan Zhang, Haoxuan Li, Ruixiang Tang, Sirui Ding, Laila Rasmy, Degui Zhi, Na Zou, Xia Hu
In this work, we present PheME, an Ensemble framework using Multi-modality data of structured EHRs and unstructured clinical notes for accurate Phenotype prediction.
10 code implementations • 17 Mar 2023 • Daochen Zha, Zaid Pervaiz Bhat, Kwei-Herng Lai, Fan Yang, Zhimeng Jiang, Shaochen Zhong, Xia Hu
Artificial Intelligence (AI) is making a profound impact in almost every domain.
no code implementations • 8 Mar 2023 • Ruixiang Tang, Xiaotian Han, Xiaoqian Jiang, Xia Hu
Our method has resulted in significant improvements in the performance of downstream tasks, improving the F1-score from 23.37% to 63.99% for the named entity recognition task and from 75.86% to 83.59% for the relation extraction task.
1 code implementation • NeurIPS 2023 • Zhimeng Jiang, Xiaotian Han, Hongye Jin, Guanchu Wang, Rui Chen, Na Zou, Xia Hu
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR) by considering the worst case within the model weight perturbation ball for each sensitive attribute group.
1 code implementation • 5 Mar 2023 • Yu-Neng Chuang, Guanchu Wang, Fan Yang, Quan Zhou, Pushkar Tripathi, Xuanting Cai, Xia Hu
In this work, we propose a COntrastive Real-Time eXplanation (CoRTX) framework to learn the explanation-oriented representation and relieve the intensive dependence of explainer training on explanation labels.
no code implementations • 28 Feb 2023 • Diego Martinez, Daochen Zha, Qiaoyu Tan, Xia Hu
However, the existing systems often have a very small search space for feature preprocessing with the same preprocessing pipeline applied to all the numerical features.
no code implementations • 18 Feb 2023 • Sirui Ding, Ruixiang Tang, Daochen Zha, Na Zou, Kai Zhang, Xiaoqian Jiang, Xia Hu
To tackle this problem, this work proposes a fair machine learning framework targeting graft failure prediction in liver transplant.
no code implementations • 7 Feb 2023 • Yu-Neng Chuang, Guanchu Wang, Fan Yang, Zirui Liu, Xuanting Cai, Mengnan Du, Xia Hu
Finally, we summarize the challenges of deploying XAI acceleration methods to real-world scenarios, overcoming the trade-off between faithfulness and efficiency, and the selection of different acceleration methods.
no code implementations • 4 Feb 2023 • Ruixiang Tang, Yu-Neng Chuang, Xia Hu
The emergence of large language models (LLMs) has resulted in the production of LLM-generated text that is highly sophisticated and almost indistinguishable from text written by humans.
1 code implementation • 31 Jan 2023 • Xiaotian Han, Zhimeng Jiang, Hongye Jin, Zirui Liu, Na Zou, Qifan Wang, Xia Hu
Unfortunately, in this paper, we reveal that the fairness metric $\Delta DP$ cannot precisely measure the violation of demographic parity, because it inherently has the following drawbacks: i) zero-value $\Delta DP$ does not guarantee zero violation of demographic parity, ii) $\Delta DP$ values can vary with different classification thresholds.
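Drawback ii) is easy to demonstrate: with fixed score distributions, the positive rates of two groups can match at one threshold and diverge at another (the scores below are made-up toy data, not from the paper):

```python
def delta_dp(scores_a, scores_b, threshold):
    """Demographic-parity gap from hard decisions at a given threshold:
    |P(yhat=1 | group A) - P(yhat=1 | group B)|."""
    rate_a = sum(s >= threshold for s in scores_a) / len(scores_a)
    rate_b = sum(s >= threshold for s in scores_b) / len(scores_b)
    return abs(rate_a - rate_b)

group_a = [0.1, 0.4, 0.6, 0.9]    # hypothetical scores, group A
group_b = [0.3, 0.45, 0.55, 0.7]  # hypothetical scores, group B

dp_at_05 = delta_dp(group_a, group_b, 0.5)  # positive rates agree here
dp_at_06 = delta_dp(group_a, group_b, 0.6)  # ...but not at this threshold
```

A classifier can thus appear perfectly "fair" under $\Delta DP$ at the default threshold while violating parity at every other operating point.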
1 code implementation • 12 Jan 2023 • Daochen Zha, Zaid Pervaiz Bhat, Kwei-Herng Lai, Fan Yang, Xia Hu
The role of data in building AI systems has recently been significantly magnified by the emerging concept of data-centric AI (DCAI), which advocates a fundamental shift from model advancements to ensuring data quality and reliability.
1 code implementation • 23 Dec 2022 • Qiaoyu Tan, Xin Zhang, Ninghao Liu, Daochen Zha, Li Li, Rui Chen, Soo-Hyun Choi, Xia Hu
To bridge the gap, we introduce a Personalized Subgraph Selector (PS2) as a plug-and-play framework to automatically, personally, and inductively identify optimal subgraphs for different edges when performing GNNLP.
no code implementations • 20 Dec 2022 • Cameron Diao, Kaixiong Zhou, Zirui Liu, Xiao Huang, Xia Hu
Recently, the training paradigm of "pre-train, fine-tune" has been leveraged to improve the generalization capabilities of GNNs.
no code implementations • 8 Dec 2022 • Huiyuan Chen, Xiaoting Li, Kaixiong Zhou, Xia Hu, Chin-Chia Michael Yeh, Yan Zheng, Hao Yang
We found that our TinyKG with INT2 quantization aggressively reduces the memory footprint of activation maps by $7\times$, with only a $2\%$ loss in accuracy, allowing us to deploy KGNNs on memory-constrained devices.
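A generic uniform 2-bit quantizer conveys the mechanics (this is a minimal sketch of INT2 activation quantization in general, not TinyKG's exact scheme): activations are mapped to four levels and dequantized on the backward pass from a stored scale and offset.

```python
import numpy as np

def quantize_int2(x):
    """Uniform 2-bit quantization: map x to integer levels {0, 1, 2, 3}
    with a per-tensor scale and zero point."""
    lo, hi = x.min(), x.max()
    scale = (hi - lo) / 3 if hi > lo else 1.0
    q = np.clip(np.round((x - lo) / scale), 0, 3).astype(np.uint8)
    return q, scale, lo

def dequantize_int2(q, scale, lo):
    # Reconstruct an approximation of the original activations.
    return q.astype(np.float32) * scale + lo

x = np.array([0.0, 0.3, 0.7, 1.0], dtype=np.float32)
q, scale, lo = quantize_int2(x)
x_hat = dequantize_int2(q, scale, lo)
```

Storing 2 bits instead of 32 gives a nominal 16x reduction per element; bookkeeping overhead (scales, packing) explains why practical end-to-end savings land lower, e.g. the reported $7\times$.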
1 code implementation • 6 Dec 2022 • Zhimeng Jiang, Kaixiong Zhou, Mi Zhang, Rui Chen, Xia Hu, Soo-Hyun Choi
In this work, we explicitly factor in the uncertainty of estimated ad impression values and model the risk preference of a DSP under a specific state and market environment via a sequential decision process.
no code implementations • 26 Nov 2022 • Yu-Neng Chuang, Kwei-Herng Lai, Ruixiang Tang, Mengnan Du, Chia-Yuan Chang, Na Zou, Xia Hu
Knowledge graph data are prevalent in real-world applications, and knowledge graph neural networks (KGNNs) are essential techniques for knowledge graph representation learning.
no code implementations • 9 Nov 2022 • Kaixiong Zhou, Zhenyu Zhang, Shengyuan Chen, Tianlong Chen, Xiao Huang, Zhangyang Wang, Xia Hu
Quantum neural networks (QNNs), an interdisciplinary field of quantum computing and machine learning, have attracted tremendous research interests due to the specific quantum advantages.
no code implementations • 19 Oct 2022 • Zirui Liu, Shengyuan Chen, Kaixiong Zhou, Daochen Zha, Xiao Huang, Xia Hu
To this end, we propose Randomized Sparse Computation, which for the first time demonstrates the potential of training GNNs with approximated operations.
2 code implementations • 14 Oct 2022 • Keyu Duan, Zirui Liu, Peihao Wang, Wenqing Zheng, Kaixiong Zhou, Tianlong Chen, Xia Hu, Zhangyang Wang
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
Ranked #2 on Node Property Prediction on ogbn-products
1 code implementation • 5 Oct 2022 • Daochen Zha, Louis Feng, Qiaoyu Tan, Zirui Liu, Kwei-Herng Lai, Bhargav Bhushanam, Yuandong Tian, Arun Kejariwal, Xia Hu
Although prior work has explored learning-based approaches for the device placement of computational graphs, embedding table placement remains a challenging problem because of 1) the operation fusion of embedding tables, and 2) the generalizability requirement on unseen placement tasks with different numbers of tables and/or devices.
2 code implementations • 30 Sep 2022 • Xiaotian Han, Tong Zhao, Yozen Liu, Xia Hu, Neil Shah
Training graph neural networks (GNNs) on large graphs is complex and extremely time consuming.
2 code implementations • 26 Aug 2022 • Daochen Zha, Kwei-Herng Lai, Qiaoyu Tan, Sirui Ding, Na Zou, Xia Hu
Motivated by this, we investigate developing a learning-based over-sampling algorithm to optimize the classification performance, which is a challenging task because of the huge and hierarchical decision space.
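As a point of reference for the decision space being optimized, the naive baseline is random over-sampling: duplicate minority samples until the classes balance (a minimal sketch we wrote for illustration, not the paper's learned algorithm):

```python
import random

random.seed(0)

def oversample(X, y):
    """Randomly duplicate minority-class samples until every class
    matches the size of the largest class."""
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    target = max(len(v) for v in by_class.values())
    X_out, y_out = [], []
    for label, samples in by_class.items():
        extra = [random.choice(samples) for _ in range(target - len(samples))]
        for xi in samples + extra:
            X_out.append(xi)
            y_out.append(label)
    return X_out, y_out

X, y = [[0.0], [1.0], [2.0], [9.0]], [0, 0, 0, 1]
Xb, yb = oversample(X, y)
```

A learning-based sampler instead chooses *which* samples to synthesize or duplicate, and where, which is what makes the decision space huge and hierarchical.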
no code implementations • 25 Aug 2022 • Mengnan Du, Fengxiang He, Na Zou, DaCheng Tao, Xia Hu
We first introduce the concepts of shortcut learning of language models.
1 code implementation • 12 Aug 2022 • Daochen Zha, Louis Feng, Bhargav Bhushanam, Dhruv Choudhary, Jade Nie, Yuandong Tian, Jay Chae, Yinbin Ma, Arun Kejariwal, Xia Hu
This is a significant design challenge in distributed systems known as embedding table sharding, i.e., how we should partition the embedding tables to balance the costs across devices, which is a non-trivial task because 1) it is hard to efficiently and precisely measure the cost, and 2) the partition problem is known to be NP-hard.
1 code implementation • 5 Aug 2022 • Guanchu Wang, Zirui Liu, Zhimeng Jiang, Ninghao Liu, Na Zou, Xia Hu
Activation compressed training provides a solution towards reducing the memory cost of training deep neural networks (DNNs).
no code implementations • 4 Aug 2022 • Fan Yang, Qizhang Feng, Kaixiong Zhou, Jiahao Chen, Xia Hu
Counterfactuals, an emerging type of model explanation, have recently attracted significant attention from both industry and academia.
1 code implementation • 20 Jul 2022 • Guanchu Wang, Mengnan Du, Ninghao Liu, Na Zou, Xia Hu
Existing work on fairness modeling commonly assumes that sensitive attributes for all instances are fully available, which may not be true in many real-world applications due to the high cost of acquiring sensitive information.
no code implementations • 29 Jun 2022 • Qizhang Feng, Mengnan Du, Na Zou, Xia Hu
The digitization of healthcare data coupled with advances in computational capabilities has propelled the adoption of machine learning (ML) in healthcare.
1 code implementation • 17 Jun 2022 • Guanchu Wang, Yu-Neng Chuang, Mengnan Du, Fan Yang, Quan Zhou, Pushkar Tripathi, Xuanting Cai, Xia Hu
Even though Shapley value provides an effective explanation for a DNN model prediction, the computation relies on the enumeration of all possible input feature coalitions, which leads to the exponentially growing complexity.
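The exponential cost is visible in the exact formula itself: each feature's value sums over all coalitions of the remaining features (the toy additive game below is our illustration; for such games the Shapley value of each player reduces to its individual payoff):

```python
from itertools import combinations
from math import factorial

def exact_shapley(value_fn, n_features):
    """Exact Shapley values by brute force: for each feature, enumerate
    all 2^(n-1) coalitions of the others -- the exponential blow-up that
    real-time explainers must avoid."""
    phi = [0.0] * n_features
    for i in range(n_features):
        others = [j for j in range(n_features) if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # Standard Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n_features - len(S) - 1) / factorial(n_features)
                phi[i] += w * (value_fn(set(S) | {i}) - value_fn(set(S)))
    return phi

# Toy additive game: v(S) = sum of per-feature payoffs.
payoffs = [1.0, 2.0, 3.0]
phi = exact_shapley(lambda S: sum(payoffs[j] for j in S), 3)
```

Already at a few dozen features the enumeration is infeasible, motivating amortized or sampling-based approximations.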
no code implementations • 27 May 2022 • Yicheng Wang, Xiaotian Han, Chia-Yuan Chang, Daochen Zha, Ulisses Braga-Neto, Xia Hu
Physics-informed neural networks (PINNs) are revolutionizing science and engineering practice by bringing the power of deep learning to bear on scientific computation.
1 code implementation • 15 Feb 2022 • Xiaotian Han, Zhimeng Jiang, Ninghao Liu, Xia Hu
To this end, we propose $\mathcal{G}$-Mixup to augment graphs for graph classification by interpolating the generator (i.e., graphon) of different classes of graphs.
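The interpolation step can be shown in miniature: mix the edge-probability matrices of two classes (stand-ins for estimated graphons) and sample a synthetic graph from the result (the matrices and mixing ratio below are made up for illustration; estimating graphons from observed graphs is the part this sketch omits):

```python
import numpy as np

rng = np.random.default_rng(0)

# Edge-probability matrices standing in for the graphons of two classes.
W_class0 = np.full((4, 4), 0.9)   # a dense class
W_class1 = np.full((4, 4), 0.1)   # a sparse class

lam = 0.5
W_mix = lam * W_class0 + (1 - lam) * W_class1   # mixed graphon

# Sample an undirected, loop-free adjacency matrix from the mixed graphon.
A = (rng.random(W_mix.shape) < W_mix).astype(int)
A = np.triu(A, 1)
A = A + A.T
```

The synthetic graph is assigned the correspondingly mixed label, exactly as in ordinary Mixup for vectors.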
3 code implementations • 14 Feb 2022 • Guanchu Wang, Zaid Pervaiz Bhat, Zhimeng Jiang, Yi-Wei Chen, Daochen Zha, Alfredo Costilla Reyes, Afshin Niktash, Gorkem Ulkar, Erman Okman, Xuanting Cai, Xia Hu
DNNs have been an effective tool for data processing and analysis.
no code implementations • 13 Feb 2022 • Xiaotian Han, Zhimeng Jiang, Ninghao Liu, Qingquan Song, Jundong Li, Xia Hu
Learning discriminative node representations benefits various downstream tasks in graph analysis such as community detection and node classification.
no code implementations • 8 Feb 2022 • Zhimeng Jiang, Xiaotian Han, Chao Fan, Zirui Liu, Na Zou, Ali Mostafavi, Xia Hu
Despite recent advances in achieving fair representations and predictions through regularization, adversarial debiasing, and contrastive learning in graph neural networks (GNNs), the working mechanism (i.e., message passing) behind GNNs inducing unfairness issue remains unknown.
no code implementations • 21 Jan 2022 • Ying-Xin Wu, Xiang Wang, An Zhang, Xia Hu, Fuli Feng, Xiangnan He, Tat-Seng Chua
In this work, we propose Deconfounded Subgraph Evaluation (DSE) which assesses the causal effect of an explanatory subgraph on the model prediction.
1 code implementation • 7 Jan 2022 • Qiaoyu Tan, Ninghao Liu, Xiao Huang, Rui Chen, Soo-Hyun Choi, Xia Hu
We introduce a novel masked graph autoencoder (MGAE) framework to perform effective learning on graph structure data.
1 code implementation • 5 Jan 2022 • Daochen Zha, Kwei-Herng Lai, Kaixiong Zhou, Xia Hu
Prior work has approached TSC from two major directions: (1) similarity-based methods that classify time-series based on the nearest neighbors, and (2) deep learning models that directly learn the representations for classification in a data-driven manner.
no code implementations • 8 Nov 2021 • Ruixiang Tang, Ninghao Liu, Fan Yang, Na Zou, Xia Hu
Explainable machine learning attracts increasing attention as it improves transparency of models, which is helpful for machine learning to be trusted in real applications.
no code implementations • 28 Oct 2021 • Haotian Xue, Kaixiong Zhou, Tianlong Chen, Kai Guo, Xia Hu, Yi Chang, Xin Wang
In this paper, we investigate GNNs from the lens of weight and feature loss landscapes, i.e., the loss changes with respect to model weights and node features, respectively.
no code implementations • 16 Oct 2021 • Mengnan Du, Subhabrata Mukherjee, Yu Cheng, Milad Shokouhi, Xia Hu, Ahmed Hassan Awadallah
Recent work has focused on compressing pre-trained language models (PLMs) like BERT where the major focus has been to improve the in-distribution performance for downstream tasks.
no code implementations • 29 Sep 2021 • Ruixiang Tang, Hongye Jin, Curtis Wigington, Mengnan Du, Rajiv Jain, Xia Hu
The main idea is to insert a watermark known only to the defender into the protected model; the watermark will then be transferred into all stolen models.
no code implementations • ICLR 2022 • Zhimeng Jiang, Kaixiong Zhou, Zirui Liu, Li Li, Rui Chen, Soo-Hyun Choi, Xia Hu
Instance-dependent label noise (IDN) widely exists in real-world datasets and usually misleads the training of deep neural networks.
no code implementations • 29 Sep 2021 • Xiaotian Han, Zhimeng Jiang, Ninghao Liu, Xia Hu
To this end, we propose $\mathcal{G}$-Mixup to augment graphs for graph classification by interpolating the generator (i.e., graphon) of different classes of graphs.
no code implementations • 29 Sep 2021 • Duc N.M Hoang, Kaixiong Zhou, Tianlong Chen, Xia Hu, Zhangyang Wang
Despite the preliminary success, we argue that for GNNs, NAS has to be customized further, due to the topological complexity of GNN input data (graphs) as well as the notorious training instability.
no code implementations • ICLR 2022 • Zirui Liu, Kaixiong Zhou, Fan Yang, Li Li, Rui Chen, Xia Hu
Based on the implementation, we propose a memory-efficient framework called ``EXACT'', which for the first time demonstrates the potential and evaluates the feasibility of training GNNs with compressed activations.
1 code implementation • ICLR 2022 • Zhimeng Jiang, Xiaotian Han, Chao Fan, Fan Yang, Ali Mostafavi, Xia Hu
We show the understanding of GDP from the probability perspective and theoretically reveal the connection between GDP regularizer and adversarial debiasing.
1 code implementation • 23 Sep 2021 • Kai Guo, Kaixiong Zhou, Xia Hu, Yu Li, Yi Chang, Xin Wang
Graph neural networks (GNNs) have received tremendous attention due to their superiority in learning node representations.
no code implementations • 30 Aug 2021 • Kaixiong Zhou, Ninghao Liu, Fan Yang, Zirui Liu, Rui Chen, Li Li, Soo-Hyun Choi, Xia Hu
Graph neural networks (GNNs), which learn node representations by recursively aggregating information from their neighbors, have become a predominant computational tool in many domains.
1 code implementation • 24 Aug 2021 • Tianlong Chen, Kaixiong Zhou, Keyu Duan, Wenqing Zheng, Peihao Wang, Xia Hu, Zhangyang Wang
In view of those, we present the first fair and reproducible benchmark dedicated to assessing the "tricks" of training deep GNNs.
1 code implementation • 9 Aug 2021 • Daochen Zha, Zaid Pervaiz Bhat, Yi-Wei Chen, Yicheng Wang, Sirui Ding, Jiaben Chen, Kwei-Herng Lai, Mohammad Qazim Bhat, Anmoll Kumar Jain, Alfredo Costilla Reyes, Na Zou, Xia Hu
Action recognition is an important task for video understanding with broad applications.
1 code implementation • NeurIPS 2021 • Kaixiong Zhou, Xiao Huang, Daochen Zha, Rui Chen, Li Li, Soo-Hyun Choi, Xia Hu
To this end, we analyze the bottleneck of deep GNNs by leveraging the Dirichlet energy of node embeddings, and propose a generalizable principle to guide the training of deep GNNs.
no code implementations • 29 Jun 2021 • Kiarash Zahirnia, Ankita Sakhuja, Oliver Schulte, Parmis Nadaf, Ke Li, Xia Hu
Our experiments demonstrate a significant improvement in the realism of the generated graph structures, typically by 1-2 orders of magnitude of graph structure metrics, compared to leading graph VAE and GAN models.
no code implementations • NeurIPS 2021 • Mengnan Du, Subhabrata Mukherjee, Guanchu Wang, Ruixiang Tang, Ahmed Hassan Awadallah, Xia Hu
This process not only requires many instance-level annotations for sensitive attributes, but also does not guarantee that all fairness-sensitive information has been removed from the encoder.
no code implementations • 16 Jun 2021 • Fan Yang, Sahan Suresh Alva, Jiahao Chen, Xia Hu
To address these limitations, we propose a Model-based Counterfactual Synthesizer (MCS) framework for interpreting machine learning models.
1 code implementation • 11 Jun 2021 • Daochen Zha, Jingru Xie, Wenye Ma, Sheng Zhang, Xiangru Lian, Xia Hu, Ji Liu
Games are abstractions of the real world, where artificial agents learn to compete and cooperate with other agents.
1 code implementation • 10 Jun 2021 • Daochen Zha, Kwei-Herng Lai, Kaixiong Zhou, Xia Hu
Supervised regression to demonstrations has been shown to be a stable way to train deep policy networks.
no code implementations • 28 May 2021 • Huiqi Deng, Na Zou, Mengnan Du, Weifu Chen, Guocan Feng, Xia Hu
However, the attribution problem has not been well-defined, which lacks a unified guideline to the contribution assignment process.
no code implementations • 17 May 2021 • Yuening Li, Zhengzhang Chen, Daochen Zha, Mengnan Du, Denghui Zhang, Haifeng Chen, Xia Hu
Motivated by the success of disentangled representation learning in computer vision, we study the possibility of learning semantic-rich time-series representations, which remains unexplored due to three main challenges: 1) sequential data structure introduces complex temporal correlations and makes the latent representations hard to interpret, 2) sequential models suffer from KL vanishing problem, and 3) interpretable semantic concepts for time-series often rely on multiple factors instead of individuals.
no code implementations • 14 Apr 2021 • Huiqi Deng, Na Zou, Weifu Chen, Guocan Feng, Mengnan Du, Xia Hu
The basic idea is to learn a source signal by back-propagation such that the mutual information between the input and the output is, as much as possible, preserved in the mutual information between the input and the source signal.
1 code implementation • ICCV 2021 • Zirui Liu, Haifeng Jin, Ting-Hsiang Wang, Kaixiong Zhou, Xia Hu
We validate in experiments that the relative gain from automated data augmentation in test accuracy is highly correlated to Variance Diversity.
no code implementations • NAACL 2021 • Mengnan Du, Varun Manjunatha, Rajiv Jain, Ruchi Deshpande, Franck Dernoncourt, Jiuxiang Gu, Tong Sun, Xia Hu
These two observations are further employed to formulate a measurement which can quantify the shortcut degree of each training sample.
no code implementations • 8 Mar 2021 • Xia Hu, Lingyang Chu, Jian Pei, Weiqing Liu, Jiang Bian
Model complexity is a fundamental problem in deep learning.
1 code implementation • 18 Feb 2021 • Qiaoyu Tan, Jianwei Zhang, Jiangchao Yao, Ninghao Liu, Jingren Zhou, Hongxia Yang, Xia Hu
Our sparse-interest module can adaptively infer a sparse set of concepts for each user from the large concept pool and output multiple embeddings accordingly.
1 code implementation • 18 Feb 2021 • Qiaoyu Tan, Jianwei Zhang, Ninghao Liu, Xiao Huang, Hongxia Yang, Jingren Zhou, Xia Hu
It segments the overall long behavior sequence into a series of sub-sequences, then trains the model and maintains a set of memory blocks to preserve long-term interests of users.
3 code implementations • ICLR 2021 • Daochen Zha, Wenye Ma, Lei Yuan, Xia Hu, Ji Liu
Unfortunately, methods based on intrinsic rewards often fall short in procedurally-generated environments, where a different environment is generated in each episode so that the agent is not likely to visit the same state more than once.
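Why count-based intrinsic rewards break down there is easy to show: the classic novelty bonus $1/\sqrt{N(s)}$ decays only when states repeat, and in a procedurally generated environment they essentially never do (a minimal sketch we wrote to illustrate the failure mode, not the paper's method):

```python
from collections import Counter

def intrinsic_rewards(states):
    """Count-based novelty bonus 1/sqrt(N(s)) for a visitation sequence."""
    counts = Counter()
    bonuses = []
    for s in states:
        counts[s] += 1
        bonuses.append(1.0 / counts[s] ** 0.5)
    return bonuses

repeated = intrinsic_rewards(["s1", "s1", "s1"])  # bonus decays with visits
procgen = intrinsic_rewards(["a7", "k2", "z9"])   # every state fresh: no decay
```

When every state is fresh, the bonus is maximal everywhere and stops discriminating genuinely novel behavior from routine behavior.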
no code implementations • 18 Jan 2021 • Fan Yang, Ninghao Liu, Mengnan Du, Xia Hu
With the wide use of deep neural networks (DNN), model interpretability has become a critical concern, since explainable decisions are preferred in high-stake scenarios.
no code implementations • 1 Jan 2021 • Yi-Wei Chen, Qingquan Song, Xia Hu
Differentiable NAS with supernets that encompass all potential architectures in a large graph cuts down search overhead to a few GPU days or less.
no code implementations • NeurIPS 2020 • Zirui Liu, Qingquan Song, Kaixiong Zhou, Ting-Hsiang Wang, Ying Shan, Xia Hu
Motivated by the observation, in this paper, we propose to investigate the interaction detection problem from a novel topological perspective by analyzing the connectivity in neural networks.
no code implementations • 17 Nov 2020 • Ruixiang Tang, Mengnan Du, Xia Hu
In this paper, we present DSN (Deep Serial Number), a simple yet effective watermarking algorithm designed specifically for deep neural networks (DNNs).
no code implementations • 29 Oct 2020 • Imtiaz Ahmed, Travis Galoppo, Xia Hu, Yu Ding
In order to make dimensionality reduction effective for high-dimensional data embedded in a nonlinear low-dimensional manifold, it is understood that some sort of geodesic distance metric should be used to discriminate the data samples.
no code implementations • 25 Oct 2020 • Zirui Liu, Qingquan Song, Kaixiong Zhou, Ting Hsiang Wang, Ying Shan, Xia Hu
Detecting statistical interactions between input features is a crucial and challenging task.
1 code implementation • 18 Sep 2020 • Kwei-Herng Lai, Daochen Zha, Guanchu Wang, Junjie Xu, Yue Zhao, Devesh Kumar, Yile Chen, Purav Zumkhawaka, Minyang Wan, Diego Martinez, Xia Hu
We present TODS, an automated Time Series Outlier Detection System for research and industrial applications.
1 code implementation • 16 Sep 2020 • Daochen Zha, Kwei-Herng Lai, Mingyang Wan, Xia Hu
Specifically, existing strategies have been focused on making the top instances more likely to be anomalous based on the feedback.
no code implementations • 16 Sep 2020 • Ninghao Liu, Yunsong Meng, Xia Hu, Tie Wang, Bo Long
Recent years have witnessed an increasing number of interpretation methods being developed for improving transparency of NLP models.
no code implementations • 21 Aug 2020 • Ninghao Liu, Yong Ge, Li Li, Xia Hu, Rui Chen, Soo-Hyun Choi
Different from previous work, in our model, factor discovery and representation learning are simultaneously conducted, and we are able to handle extra attribute information and knowledge.
no code implementations • 21 Aug 2020 • Huiqi Deng, Na Zou, Mengnan Du, Weifu Chen, Guocan Feng, Xia Hu
Attribution methods have been developed to understand the decision-making process of machine learning models, especially deep neural networks, by assigning importance scores to individual features.
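The core mechanic of assigning importance scores to individual features can be illustrated with a numeric finite-difference saliency, a stand-in for the gradient-based attributions the entry discusses (the toy model `f` is an assumption, not from the paper):

```python
import numpy as np

def saliency(f, x, eps=1e-5):
    """Finite-difference importance scores: perturb each feature and
    measure how much the model output moves. Approximates the gradient
    of f at x, one coordinate at a time."""
    base = f(x)
    scores = np.zeros_like(x)
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += eps
        scores[i] = (f(xp) - base) / eps
    return scores

f = lambda x: 3.0 * x[0] + 0.5 * x[1] ** 2   # toy differentiable model
x = np.array([1.0, 2.0])
s = saliency(f, x)   # feature 0 has constant slope 3, feature 1 has slope x[1]
```

Real attribution methods differ in how they handle nonlinearity and baselines, but all produce a per-feature score vector of this shape.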
no code implementations • 24 Jul 2020 • Sina Mohseni, Fan Yang, Shiva Pentyala, Mengnan Du, Yi Liu, Nic Lupfer, Xia Hu, Shuiwang Ji, Eric Ragan
Combating fake news and misinformation propagation is a challenging task in the post-truth era.
no code implementations • 29 Jun 2020 • Qingquan Song, Dehua Cheng, Hanning Zhou, Jiyan Yang, Yuandong Tian, Xia Hu
Click-Through Rate (CTR) prediction is one of the most important machine learning tasks in recommender systems, driving personalized experience for billions of consumers.
1 code implementation • 26 Jun 2020 • Ting-Hsiang Wang, Qingquan Song, Xiaotian Han, Zirui Liu, Haifeng Jin, Xia Hu
To address the need, we present AutoRec, an open-source automated machine learning (AutoML) platform extended from the TensorFlow ecosystem and, to our knowledge, the first framework to leverage AutoML for model search and hyperparameter tuning in deep recommendation models.
1 code implementation • 26 Jun 2020 • Kwei-Herng Lai, Daochen Zha, Kaixiong Zhou, Xia Hu
It is a challenging task to develop an effective aggregation strategy for each node, given complex graphs and sparse features.
no code implementations • 19 Jun 2020 • Yuening Li, Zhengzhang Chen, Daochen Zha, Kaixiong Zhou, Haifeng Jin, Haifeng Chen, Xia Hu
Outlier detection is an important data mining task with numerous practical applications such as intrusion detection, credit card fraud detection, and video surveillance.
no code implementations • 16 Jun 2020 • Xia Hu, Weiqing Liu, Jiang Bian, Jian Pei
Our results demonstrate that the occurrence of overfitting is positively correlated with the increase of model complexity during training.
1 code implementation • 15 Jun 2020 • Ruixiang Tang, Mengnan Du, Yuening Li, Zirui Liu, Na Zou, Xia Hu
Image captioning has made substantial progress with huge supporting image collections sourced from the web.
1 code implementation • 15 Jun 2020 • Ruixiang Tang, Mengnan Du, Ninghao Liu, Fan Yang, Xia Hu
In this paper, we investigate a specific security problem called trojan attack, which aims to attack deployed DNN systems relying on the hidden trigger patterns inserted by malicious hackers.
1 code implementation • NeurIPS 2020 • Kaixiong Zhou, Xiao Huang, Yuening Li, Daochen Zha, Rui Chen, Xia Hu
Graph neural networks (GNNs), which learn the representation of a node by aggregating its neighbors, have become an effective computational tool in downstream applications.
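The aggregate-from-neighbors computation that defines a GNN layer can be sketched as mean aggregation with self-loops. This is a generic message-passing layer for illustration, not the specific model proposed in the entry:

```python
import numpy as np

def gnn_layer(A, H, W):
    """One message-passing layer: each node's new representation is a
    nonlinear transform of the mean of its own and its neighbors' features."""
    A_hat = A + np.eye(len(A))              # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)  # degree of each node (with self-loop)
    return np.maximum(A_hat @ H @ W / deg, 0.0)  # mean-aggregate, project, ReLU

# Tiny path graph 0-1-2 with 2-d node features and identity weights.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])
H1 = gnn_layer(A, H, np.eye(2))
```

Stacking such layers lets information propagate k hops in k layers, which is what makes the representations useful downstream.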
1 code implementation • 7 Jun 2020 • Kwei-Herng Lai, Daochen Zha, Yuening Li, Xia Hu
In this work, we introduce dual policy distillation (DPD), a student-student framework in which two learners operate on the same environment to explore different perspectives of the environment and extract knowledge from each other to enhance their learning.
no code implementations • 3 Jun 2020 • Hao Yuan, Jiliang Tang, Xia Hu, Shuiwang Ji
Furthermore, our experimental results indicate that the generated graphs can provide guidance on how to improve the trained GNNs.
no code implementations • 16 May 2020 • Zhengyang Wang, Xia Hu, Shuiwang Ji
On the other hand, iCapsNets explore a novel way to explain the model's general behavior, achieving global interpretability.
no code implementations • 23 Apr 2020 • Ninghao Liu, Mengnan Du, Ruocheng Guo, Huan Liu, Xia Hu
In this paper, we review recent work on adversarial attacks and defenses, particularly from the perspective of machine learning interpretation.
no code implementations • 12 Mar 2020 • Yuening Li, Daochen Zha, Praveen Kumar Venugopal, Na Zou, Xia Hu
Outlier detection is an important task for various data mining applications.
no code implementations • 4 Mar 2020 • Qiaoyu Tan, Ninghao Liu, Xing Zhao, Hongxia Yang, Jingren Zhou, Xia Hu
In this work, we investigate the problem of hashing with graph neural networks (GNNs) for high quality retrieval, and propose a simple yet effective discrete representation learning framework to jointly learn continuous and discrete codes.
no code implementations • 17 Dec 2019 • Kaixiong Zhou, Qingquan Song, Xiao Huang, Daochen Zha, Na Zou, Xia Hu
To further improve the graph representation learning ability, hierarchical GNN has been explored.
1 code implementation • 4 Nov 2019 • Fan Yang, Zijian Zhang, Haofan Wang, Yuening Li, Xia Hu
XDeep is an open-source Python package developed to interpret deep models for both practitioners and researchers.
8 code implementations • 10 Oct 2019 • Daochen Zha, Kwei-Herng Lai, Yuanpu Cao, Songyi Huang, Ruzhe Wei, Junyu Guo, Xia Hu
The goal of RLCard is to bridge reinforcement learning and imperfect information games, and push forward the research of reinforcement learning in domains with multiple agents, large state and action space, and sparse reward.
1 code implementation • 7 Oct 2019 • Yuening Li, Daochen Zha, Na Zou, Xia Hu
PyODDS is an end-to-end Python system for outlier detection with database support.
9 code implementations • 3 Oct 2019 • Haofan Wang, Zifan Wang, Mengnan Du, Fan Yang, Zijian Zhang, Sirui Ding, Piotr Mardziel, Xia Hu
Recently, increasing attention has been drawn to the internal mechanisms of convolutional neural networks, and the reason why the network makes specific decisions.
no code implementations • 2 Oct 2019 • Zijian Zhang, Fan Yang, Haofan Wang, Xia Hu
We introduce a new model-agnostic explanation technique, called CLE, which explains the prediction of any classifier.
1 code implementation • 1 Oct 2019 • Yijun Bian, Qingquan Song, Mengnan Du, Jun Yao, Huanhuan Chen, Xia Hu
Neural architecture search (NAS) is gaining more and more attention in recent years due to its flexibility and remarkable capability to reduce the burden of neural network design.
no code implementations • 25 Sep 2019 • Weijie Fu, Meng Wang, Mengnan Du, Ninghao Liu, Shijie Hao, Xia Hu
Existing local explanation methods provide an explanation for each decision of black-box classifiers, in the form of relevance scores of features according to their contributions.
no code implementations • 13 Sep 2019 • Mengnan Du, Shiva Pentyala, Yuening Li, Xia Hu
The analysis further shows that LAE outperforms the state of the art by 6.52%, 12.03%, and 3.08% respectively on three deepfake detection tasks in terms of generalization accuracy on previously unseen manipulations.
no code implementations • 7 Sep 2019 • Kaixiong Zhou, Qingquan Song, Xiao Huang, Xia Hu
First, the search space of GNN is different from the ones in existing NAS work.
Ranked #38 on Node Classification on Cora
no code implementations • 23 Aug 2019 • Mengnan Du, Fan Yang, Na Zou, Xia Hu
Deep learning is increasingly being used in high-stakes decision-making applications that affect individual lives.
no code implementations • 13 Aug 2019 • Mengnan Du, Ninghao Liu, Fan Yang, Xia Hu
Recent explainability-related studies have shown that state-of-the-art DNNs do not always adopt correct evidence to make decisions.
no code implementations • 11 Aug 2019 • Yuening Li, Ninghao Liu, Jundong Li, Mengnan Du, Xia Hu
To this end, we propose a novel deep structured anomaly detection framework to identify the cross-modal anomalies embedded in the data.
no code implementations • 21 Jul 2019 • Yi-Wei Chen, Qingquan Song, Xia Hu
Automated machine learning (AutoML) aims to find optimal machine learning solutions automatically given a machine learning problem.
no code implementations • 16 Jul 2019 • Fan Yang, Mengnan Du, Xia Hu
Interpretable Machine Learning (IML) has become increasingly important in many real-world applications, such as autonomous cars and medical diagnosis, where explanations are significantly preferred to help people better understand how machine learning systems work and further enhance their trust towards systems.
BIG-bench Machine Learning Interpretable Machine Learning +1
no code implementations • 8 Jul 2019 • Fan Yang, Shiva K. Pentyala, Sina Mohseni, Mengnan Du, Hao Yuan, Rhema Linder, Eric D. Ragan, Shuiwang Ji, Xia Hu
In this demo paper, we present the XFake system, an explainable fake news detector that assists end-users to identify news credibility.
no code implementations • 19 Jun 2019 • Daochen Zha, Kwei-Herng Lai, Kaixiong Zhou, Xia Hu
Experience replay enables reinforcement learning agents to memorize and reuse past experiences, just as humans replay memories for the situation at hand.
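The memorize-and-reuse mechanism is a replay buffer: a fixed-capacity store of past transitions from which training minibatches are sampled. A minimal sketch of the standard construction (capacity and batch size are arbitrary illustrative values):

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity experience store: old transitions are evicted FIFO,
    and minibatches are sampled uniformly at random for training."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

buf = ReplayBuffer(capacity=100)
for t in range(150):                  # overfill: the oldest 50 are evicted
    buf.push(t, 0, 1.0, t + 1, False)
batch = buf.sample(32)
```

Uniform sampling breaks the temporal correlation between consecutive experiences; the entry's contribution concerns *which* experiences to replay, a question this uniform baseline leaves open.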
1 code implementation • 17 Jun 2019 • Zicun Cong, Lingyang Chu, Lanjun Wang, Xia Hu, Jian Pei
More and more AI services are provided through APIs in the cloud, where predictive models are hidden behind the APIs.
1 code implementation • 11 Jun 2019 • Qingquan Song, Shiyu Chang, Xia Hu
To bridge the gap, in this paper, we propose a Coupled Variational Recurrent Collaborative Filtering (CVRCF) framework based on the idea of Deep Bayesian Learning to handle the streaming recommendation problem.
3 code implementations • 31 May 2019 • Jiaxu Cui, Bo Yang, Xia Hu
Attributed graphs, which contain rich contextual features beyond just network structure, are ubiquitous and have been observed to benefit various network analytics applications.
1 code implementation • 25 May 2019 • Ninghao Liu, Qiaoyu Tan, Yuening Li, Hongxia Yang, Jingren Zhou, Xia Hu
Network embedding models are powerful tools in mapping nodes in a network into continuous vector-space representations in order to facilitate subsequent tasks such as classification and link prediction.
no code implementations • 18 Apr 2019 • Qiaoyu Tan, Ninghao Liu, Xia Hu
First, we introduce the basic models for learning node representations in homogeneous networks.
no code implementations • 4 Apr 2019 • Sina Mohseni, Eric Ragan, Xia Hu
Combating fake news needs a variety of defense methods.
no code implementations • 27 Mar 2019 • Mengnan Du, Ninghao Liu, Fan Yang, Shuiwang Ji, Xia Hu
REAT decomposes the final prediction of an RNN into additive contributions of each word in the input text.
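The additivity property such a decomposition targets is easiest to see in a linear scorer, where per-word contributions plus the bias recover the prediction exactly. This toy is an illustration of the property only, not REAT's RNN decomposition:

```python
import numpy as np

def additive_contributions(weights, bias, x):
    """For a linear scorer, each word's contribution is its weight times
    its feature value; contributions plus the bias sum exactly to the
    prediction -- the completeness property additive attributions target."""
    contrib = weights * x
    prediction = contrib.sum() + bias
    return contrib, prediction

weights = np.array([0.8, -0.3, 0.5])   # one weight per word feature
bias = 0.1
x = np.array([1.0, 1.0, 0.0])          # bag-of-words input: words 0 and 1 present
contrib, pred = additive_contributions(weights, bias, x)
```

For nonlinear RNNs this identity no longer holds for free; constructing word-level contributions that still sum to the prediction is precisely the technical challenge.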
no code implementations • 2 Jan 2019 • Qingquan Song, Haifeng Jin, Xiao Huang, Xia Hu
Experiments on real-world multi-label image classification and ranking problems demonstrate the effectiveness of our proposed frameworks and provide insights of the vulnerability of multi-label deep learning models under diverse targeted attacking strategies.
no code implementations • 2018 IEEE International Conference on Big Knowledge (ICBK) 2018 • Haifeng Jin, Qingquan Song, Xia Hu
Moreover, the learned vector representations are not in a smooth space since the values can only be integers.
Ranked #12 on Graph Classification on PTC
no code implementations • 31 Jul 2018 • Mengnan Du, Ninghao Liu, Xia Hu
Interpretable machine learning tackles the important problem that humans cannot understand the behaviors of complex machine learning models and how these models arrive at a particular decision.
14 code implementations • 27 Jun 2018 • Haifeng Jin, Qingquan Song, Xia Hu
In this paper, we propose a novel framework enabling Bayesian optimization to guide the network morphism for efficient neural architecture search.
no code implementations • 28 May 2018 • Yang Yang, Haoyan Liu, Xia Hu, Jiawei Zhang, Xiao-Ming Zhang, Zhoujun Li, Philip S. Yu
The number of missing people (i.e., people who get lost) has greatly increased in recent years.
no code implementations • 19 Mar 2018 • Mengnan Du, Ninghao Liu, Qingquan Song, Xia Hu
While deep neural networks (DNN) have become an effective computational tool, the prediction results are often criticized by the lack of interpretability, which is essential in many real-world applications such as health informatics.
no code implementations • 17 Feb 2018 • Lingyang Chu, Xia Hu, Juhua Hu, Lanjun Wang, Jian Pei
Strong intelligent machines powered by deep neural networks are increasingly deployed as black boxes to make decisions in risk-sensitive domains, such as finance and medicine.
no code implementations • 28 Nov 2017 • Qingquan Song, Hancheng Ge, James Caverlee, Xia Hu
Tensor completion is a problem of filling the missing or unobserved entries of partially observed tensors.
no code implementations • 28 Nov 2017 • Ninghao Liu, Donghwa Shin, Xia Hu
Outlier detection plays an essential role in many data-driven applications to identify isolated instances that are different from the majority.
no code implementations • 26 Aug 2017 • Kui Zhao, Xia Hu, Jiajun Bu, Can Wang
In order to answer these kinds of questions, we attempt to model human sense of style compatibility in this paper.
43 code implementations • WWW 2017 • Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, Tat-Seng Chua
When it comes to modeling the key factor in collaborative filtering, the interaction between user and item features, they still resorted to matrix factorization and applied an inner product on the latent features of users and items.
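The inner-product interaction that NCF argues is too limiting can be sketched directly: each user and item gets a latent vector, and the predicted preference is their dot product. Random factors stand in for learned ones here; this shows the baseline NCF generalizes, not NCF itself:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 4, 5, 3

# Matrix-factorization scoring: the predicted preference of user u for
# item i is the inner product of their k-dimensional latent vectors.
P = rng.normal(size=(n_users, k))   # user latent factors
Q = rng.normal(size=(n_items, k))   # item latent factors

def score(u, i):
    return float(P[u] @ Q[i])

all_scores = P @ Q.T                # full predicted user-item preference matrix
```

NCF's move is to replace the fixed `P[u] @ Q[i]` with a learned neural function of the two latent vectors, so the interaction need not be linear in the factors.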
no code implementations • 6 Jun 2017 • Jundong Li, Harsh Dani, Xia Hu, Jiliang Tang, Yi Chang, Huan Liu
To the best of our knowledge, we are the first to tackle this problem, which poses two challenges: (1) the inherently correlated network and node attributes can be noisy and incomplete, necessitating a robust consensus representation that captures their individual properties and correlations; (2) the embedding learning needs to be performed in an online fashion to adapt to changes accordingly.
no code implementations • 14 Aug 2016 • Zhangyang Wang, Shiyu Chang, Qing Ling, Shuai Huang, Xia Hu, Honghui Shi, Thomas S. Huang
With the agreement of my coauthors, I Zhangyang Wang would like to withdraw the manuscript "Stacked Approximated Regression Machine: A Simple Deep Learning Approach".