no code implementations • 31 Dec 2024 • Chia-Yuan Chang, Zhimeng Jiang, Vineeth Rakesh, Menghai Pan, Chin-Chia Michael Yeh, Guanchu Wang, Mingzhi Hu, Zhichao Xu, Yan Zheng, Mahashweta Das, Na Zou
Large Language Models (LLMs) are becoming essential tools for various natural language processing tasks but often suffer from generating outdated or incorrect information.
no code implementations • 15 Dec 2024 • Zhengyu Fang, Zhimeng Jiang, Huiyuan Chen, Xiao Li, Jing Li
In this paper, we conduct the first comprehensive investigation of memorization phenomena in diffusion models for tabular data.
1 code implementation • 21 Oct 2024 • Zhimeng Jiang, Zirui Liu, Xiaotian Han, Qizhang Feng, Hongye Jin, Qiaoyu Tan, Kaixiong Zhou, Na Zou, Xia Hu
In this paper, we first observe that the gradients of the cross-entropy loss for the target node and for the training nodes are significantly inconsistent, which indicates that directly fine-tuning the base model using the loss on the target node deteriorates performance on the training nodes.
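The inconsistency check described above can be sketched with a toy logistic model standing in for the GNN base model; the weights and examples below are illustrative assumptions, not the paper's setup.

```python
# Sketch of the gradient-inconsistency observation: compare the loss
# gradient on the node being edited with the gradient on training nodes.
import math

def grad(w, x, y):
    """Gradient of the logistic loss at one example: (sigma(w.x) - y) * x."""
    p = 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
    return [(p - y) * xi for xi in x]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

w = [0.5, -0.2, 0.1]
g_train = grad(w, [1.0, 2.0, 0.5], 1)       # a "training node" example
g_target = grad(w, [-1.0, 0.5, 2.0], 0)     # the node being edited

# A low or negative similarity is the inconsistency flagged above: the
# edit direction for the target node conflicts with the training nodes.
print(f"{cosine(g_train, g_target):.3f}")
```

When the two gradients point in conflicting directions, naively fine-tuning on the target node alone moves the weights against the training set.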
1 code implementation • 6 Jun 2024 • Chengyu Lai, Sheng Zhou, Zhimeng Jiang, Qiaoyu Tan, Yuanchen Bei, Jiawei Chen, Ningyu Zhang, Jiajun Bu
This paper introduces a novel and significant task termed recommendation editing, which focuses on modifying known and unsuitable recommendation behaviors.
2 code implementations • 2 Jan 2024 • Hongye Jin, Xiaotian Han, Jingfeng Yang, Zhimeng Jiang, Zirui Liu, Chia-Yuan Chang, Huiyuan Chen, Xia Hu
To achieve this goal, we propose SelfExtend to extend the context window of LLMs by constructing bi-level attention information: the grouped attention and the neighbor attention.
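The bi-level idea above can be sketched as a relative-position remapping: nearby tokens keep their exact positions (neighbor attention), while distant tokens are coarsened by floor division (grouped attention). This is a reconstruction from the description; the group size, window size, and the boundary offset are illustrative assumptions.

```python
# Sketch: bi-level relative positions in the spirit of SelfExtend.
def self_extend_rel_pos(q_pos: int, k_pos: int,
                        group: int = 4, window: int = 8) -> int:
    """Relative position a query at q_pos uses for a key at k_pos."""
    rel = q_pos - k_pos
    if rel <= window:
        return rel                      # neighbor attention: exact positions
    # Grouped attention: floor-divide both positions, then shift so the
    # two regimes join continuously at the window boundary.
    g_rel = q_pos // group - k_pos // group
    return g_rel + window - window // group

# Close keys keep exact distances; far keys are coarsened, so relative
# positions stay within the range seen during pretraining.
print([self_extend_rel_pos(100, k) for k in (99, 95, 60, 0)])
```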
no code implementations • 29 Dec 2023 • Huiyuan Chen, Vivian Lai, Hongye Jin, Zhimeng Jiang, Mahashweta Das, Xia Hu
Here we propose a non-contrastive learning objective, named nCL, which explicitly mitigates dimensional collapse of representations in collaborative filtering.
1 code implementation • 19 Dec 2023 • Zhimeng Jiang, Xiaotian Han, Chao Fan, Zirui Liu, Na Zou, Ali Mostafavi, Xia Hu
To this end, we aim to achieve fairness via a new GNN architecture.
no code implementations • 2 Oct 2023 • Chia-Yuan Chang, Yu-Neng Chuang, Zhimeng Jiang, Kwei-Herng Lai, Anxiao Jiang, Na Zou
In real-world applications, machine learning models often become obsolete due to shifts in the joint distribution arising from underlying temporal trends, a phenomenon known as concept drift.
no code implementations • 1 Oct 2023 • Hongye Jin, Xiaotian Han, Jingfeng Yang, Zhimeng Jiang, Chia-Yuan Chang, Xia Hu
Our method progressively increases the training length throughout the pretraining phase, thereby mitigating computational costs and enhancing efficiency.
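A progressive length schedule of this kind can be sketched as follows; the linear growth, start/end lengths, and step count are illustrative assumptions, not the paper's exact schedule.

```python
# Sketch: grow the training sequence length over the pretraining run.
def length_schedule(step: int, total_steps: int,
                    start_len: int = 128, end_len: int = 2048) -> int:
    """Training sequence length at `step`, growing linearly to end_len."""
    frac = min(step / max(total_steps - 1, 1), 1.0)
    return int(start_len + frac * (end_len - start_len))

# Early steps train on short (cheap) sequences; late steps on full length,
# which is where the computational savings come from.
print([length_schedule(s, 1000) for s in (0, 500, 999)])
```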
1 code implementation • 11 Jun 2023 • Hongyi Ling, Zhimeng Jiang, Meng Liu, Shuiwang Ji, Na Zou
We conduct systematic experiments to show that S-Mixup can improve the performance and generalization of graph neural networks (GNNs) on various graph classification tasks.
no code implementations • 24 May 2023 • Zirui Liu, Zhimeng Jiang, Shaochen Zhong, Kaixiong Zhou, Li Li, Rui Chen, Soo-Hyun Choi, Xia Hu
However, model editing for graph neural networks (GNNs) is rarely explored, despite GNNs' widespread applicability.
1 code implementation • NeurIPS 2023 • Zirui Liu, Guanchu Wang, Shaochen Zhong, Zhaozhuo Xu, Daochen Zha, Ruixiang Tang, Zhimeng Jiang, Kaixiong Zhou, Vipin Chaudhary, Shuai Xu, Xia Hu
While the model parameters do contribute to memory usage, the primary memory bottleneck during training arises from storing feature maps, also known as activations, as they are crucial for gradient calculation.
10 code implementations • 17 Mar 2023 • Daochen Zha, Zaid Pervaiz Bhat, Kwei-Herng Lai, Fan Yang, Zhimeng Jiang, Shaochen Zhong, Xia Hu
Artificial Intelligence (AI) is making a profound impact in almost every domain.
1 code implementation • NeurIPS 2023 • Zhimeng Jiang, Xiaotian Han, Hongye Jin, Guanchu Wang, Rui Chen, Na Zou, Xia Hu
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR) by considering the worst case within the model weight perturbation ball for each sensitive attribute group.
1 code implementation • 31 Jan 2023 • Xiaotian Han, Zhimeng Jiang, Hongye Jin, Zirui Liu, Na Zou, Qifan Wang, Xia Hu
Unfortunately, in this paper, we reveal that the fairness metric $\Delta DP$ cannot precisely measure the violation of demographic parity, because it inherently has the following drawbacks: i) a zero-value $\Delta DP$ does not guarantee zero violation of demographic parity, and ii) $\Delta DP$ values can vary with different classification thresholds.
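Drawback ii) is easy to demonstrate numerically: the same scores can yield zero $\Delta DP$ at one threshold and a clear disparity at another. The scores below are a constructed toy example, not data from the paper.

```python
# Sketch: Delta-DP depends on the classification threshold.
def delta_dp(scores_a, scores_b, threshold):
    """|P(yhat=1 | group a) - P(yhat=1 | group b)| at a given threshold."""
    rate = lambda s: sum(x >= threshold for x in s) / len(s)
    return abs(rate(scores_a) - rate(scores_b))

group_a = [0.6, 0.6, 0.4, 0.4]
group_b = [0.9, 0.55, 0.3, 0.2]

print(delta_dp(group_a, group_b, 0.5))   # 0.0  -> looks "fair"
print(delta_dp(group_a, group_b, 0.7))   # 0.25 -> disparity reappears
```

A metric that vanishes at one threshold while a violation persists at another cannot certify demographic parity on its own.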
1 code implementation • 6 Dec 2022 • Zhimeng Jiang, Kaixiong Zhou, Mi Zhang, Rui Chen, Xia Hu, Soo-Hyun Choi
In this work, we explicitly factor in the uncertainty of estimated ad impression values and model the risk preference of a DSP under a specific state and market environment via a sequential decision process.
no code implementations • 17 Oct 2022 • Han Xu, Menghai Pan, Zhimeng Jiang, Huiyuan Chen, Xiaoting Li, Mahashweta Das, Hao Yang
The existence of adversarial attacks (or adversarial examples) raises serious concerns about the safety of machine learning (ML) models.
1 code implementation • 5 Aug 2022 • Guanchu Wang, Zirui Liu, Zhimeng Jiang, Ninghao Liu, Na Zou, Xia Hu
Activation compressed training provides a solution for reducing the memory cost of training deep neural networks (DNNs).
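The core idea can be sketched as storing activations in low precision and reconstructing them for the backward pass. The per-tensor uniform 8-bit scheme below is an illustrative assumption; the paper's actual compression scheme may differ.

```python
# Sketch: compress activations for storage, decompress for the backward pass.
def quantize(acts, bits=8):
    """Map floats to small integers plus (scale, offset) for cheap storage."""
    lo, hi = min(acts), max(acts)
    scale = (hi - lo) / (2 ** bits - 1) or 1.0
    q = [round((a - lo) / scale) for a in acts]
    return q, scale, lo

def dequantize(q, scale, lo):
    # Recover approximate activations when gradients are computed.
    return [v * scale + lo for v in q]

acts = [0.12, -1.5, 3.3, 0.0]
q, scale, lo = quantize(acts)
approx = dequantize(q, scale, lo)
print(max(abs(a - b) for a, b in zip(acts, approx)))  # bounded by the scale
```

Storing 8-bit integers instead of 32-bit floats cuts activation memory roughly 4x, at the cost of a small, bounded reconstruction error.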
1 code implementation • 15 Feb 2022 • Xiaotian Han, Zhimeng Jiang, Ninghao Liu, Xia Hu
To this end, we propose $\mathcal{G}$-Mixup to augment graphs for graph classification by interpolating the generator (i.e., graphon) of different classes of graphs.
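Interpolating generators can be sketched as a convex combination of class-level edge-probability matrices, followed by sampling synthetic graphs from the mixture. The two 3x3 graphons below are toy placeholders, not graphons estimated from real graph classes.

```python
# Sketch: mix two class-level graphons and sample a synthetic graph.
import random

def mixup_graphons(W1, W2, lam):
    """Convex combination of two graphons (edge-probability matrices)."""
    n = len(W1)
    return [[lam * W1[i][j] + (1 - lam) * W2[i][j] for j in range(n)]
            for i in range(n)]

def sample_graph(W, seed=0):
    # Draw an undirected graph; edge (i, j) appears with probability W[i][j].
    rng = random.Random(seed)
    n = len(W)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < W[i][j]]

dense = [[0.9] * 3 for _ in range(3)]     # graphon of a "dense" class
sparse = [[0.1] * 3 for _ in range(3)]    # graphon of a "sparse" class
mixed = mixup_graphons(dense, sparse, lam=0.5)
print(mixed[0][0], sample_graph(mixed))
```

Graphs sampled from the mixed graphon inherit structure from both classes, which is what makes them useful as augmentations.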
3 code implementations • 14 Feb 2022 • Guanchu Wang, Zaid Pervaiz Bhat, Zhimeng Jiang, Yi-Wei Chen, Daochen Zha, Alfredo Costilla Reyes, Afshin Niktash, Gorkem Ulkar, Erman Okman, Xuanting Cai, Xia Hu
Deep neural networks (DNNs) have been an effective tool for data processing and analysis.
no code implementations • 13 Feb 2022 • Xiaotian Han, Zhimeng Jiang, Ninghao Liu, Qingquan Song, Jundong Li, Xia Hu
Learning discriminative node representations benefits various downstream tasks in graph analysis such as community detection and node classification.
no code implementations • 8 Feb 2022 • Zhimeng Jiang, Xiaotian Han, Chao Fan, Zirui Liu, Na Zou, Ali Mostafavi, Xia Hu
Despite recent advances in achieving fair representations and predictions through regularization, adversarial debiasing, and contrastive learning in graph neural networks (GNNs), the working mechanism (i.e., message passing) by which GNNs induce unfairness remains unknown.
no code implementations • 29 Sep 2021 • Xiaotian Han, Zhimeng Jiang, Ninghao Liu, Xia Hu
To this end, we propose $\mathcal{G}$-Mixup to augment graphs for graph classification by interpolating the generator (i.e., graphon) of different classes of graphs.
1 code implementation • ICLR 2022 • Zhimeng Jiang, Xiaotian Han, Chao Fan, Fan Yang, Ali Mostafavi, Xia Hu
We interpret GDP from a probabilistic perspective and theoretically reveal the connection between the GDP regularizer and adversarial debiasing.
no code implementations • ICLR 2022 • Zhimeng Jiang, Kaixiong Zhou, Zirui Liu, Li Li, Rui Chen, Soo-Hyun Choi, Xia Hu
Instance-dependent label noise (IDN) widely exists in real-world datasets and usually misleads the training of deep neural networks.