Search Results for author: Bang An

Found 17 papers, 9 papers with code

Referee-Meta-Learning for Fast Adaptation of Locational Fairness

no code implementations20 Feb 2024 Weiye Chen, Yiqun Xie, Xiaowei Jia, Erhu He, Han Bao, Bang An, Xun Zhou

When dealing with data from distinct locations, machine learning algorithms tend to demonstrate an implicit preference for some locations over others, which constitutes a bias that undermines the spatial fairness of the algorithm.

Decision Making Fairness +1

Benchmarking the Robustness of Image Watermarks

1 code implementation16 Jan 2024 Bang An, Mucong Ding, Tahseen Rabbani, Aakriti Agrawal, Yuancheng Xu, ChengHao Deng, Sicheng Zhu, Abdirisak Mohamed, Yuxin Wen, Tom Goldstein, Furong Huang

We present WAVES (Watermark Analysis Via Enhanced Stress-testing), a novel benchmark for assessing watermark robustness that overcomes the limitations of current evaluation methods. WAVES integrates detection and identification tasks and establishes a standardized evaluation protocol comprising a diverse range of stress tests.

Benchmarking
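
As a loose illustration of what such a stress-testing protocol involves (not the WAVES code or API; `embed_watermark`, `detect_watermark`, and the attack list below are toy stand-ins), a minimal detection-under-attack loop might look like:

```python
import numpy as np

# Hypothetical stand-ins -- not the WAVES API.
def embed_watermark(image, key):
    """Toy watermark: add a key-seeded noise pattern."""
    rng = np.random.default_rng(key)
    return np.clip(image + 4 * rng.normal(0, 1, image.shape), 0, 255)

def detect_watermark(image, key):
    """Toy detector: correlate the image with the keyed pattern."""
    rng = np.random.default_rng(key)
    pattern = rng.normal(0, 1, image.shape)
    return float(np.mean(image * pattern)) > 2.0

# A stress test perturbs watermarked images before running detection.
attacks = {
    "identity": lambda x: x,
    "gaussian_noise": lambda x: np.clip(x + np.random.normal(0, 8, x.shape), 0, 255),
    "quantize": lambda x: (x // 16) * 16.0,
}

def stress_test(images, key=0):
    """Report the watermark detection rate under each attack."""
    marked = [embed_watermark(img, key) for img in images]
    return {name: sum(detect_watermark(atk(m), key) for m in marked) / len(marked)
            for name, atk in attacks.items()}

imgs = [np.random.rand(64, 64) * 255 for _ in range(16)]
print(stress_test(imgs))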

Explore Spurious Correlations at the Concept Level in Language Models for Text Classification

no code implementations15 Nov 2023 YuHang Zhou, Paiheng Xu, Xiaoyu Liu, Bang An, Wei Ai, Furong Huang

We find that LMs, when encountering spurious correlations between a concept and a label in training or prompts, resort to shortcuts for predictions.

Counterfactual In-Context Learning +2
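
To make the idea concrete (a toy diagnostic only; `spurious_score` is a hypothetical helper, and the paper identifies concepts with an LLM rather than by keyword matching), one can check whether a concept's presence shifts the label distribution in a text-classification dataset:

```python
def spurious_score(examples, concept, label):
    """Toy diagnostic: how much more likely is `label` when `concept`
    appears in the text?  A ratio far from 1.0 flags a concept-label
    correlation a model could exploit as a shortcut."""
    with_c = [y for text, y in examples if concept in text.lower()]
    if not with_c:
        return 1.0
    labels = [y for _, y in examples]
    p_given_c = with_c.count(label) / len(with_c)
    p = labels.count(label) / len(labels)
    return p_given_c / p if p > 0 else float("inf")

reviews = [
    ("The food was amazing", "positive"),
    ("Amazing service, will return", "positive"),
    ("The food was cold", "negative"),
    ("Slow service", "negative"),
]
print(spurious_score(reviews, "food", "negative"))     # 1.0: balanced
print(spurious_score(reviews, "amazing", "positive"))  # 2.0: spuriously correlated
```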

C-Disentanglement: Discovering Causally-Independent Generative Factors under an Inductive Bias of Confounder

1 code implementation NeurIPS 2023 Xiaoyu Liu, Jiaxin Yuan, Bang An, Yuancheng Xu, Yifan Yang, Furong Huang

Representation learning assumes that real-world data is generated by a few semantically meaningful generative factors (i.e., sources of variation) and aims to discover them in the latent space.

Disentanglement Inductive Bias

AutoDAN: Interpretable Gradient-Based Adversarial Attacks on Large Language Models

1 code implementation23 Oct 2023 Sicheng Zhu, Ruiyi Zhang, Bang An, Gang Wu, Joe Barrow, Zichao Wang, Furong Huang, Ani Nenkova, Tong Sun

Safety alignment of Large Language Models (LLMs) can be compromised with manual jailbreak attacks and (automatic) adversarial attacks.

Adversarial Attack Blocking

Talking Models: Distill Pre-trained Knowledge to Downstream Models via Interactive Communication

no code implementations4 Oct 2023 Zhe Zhao, Qingyun Liu, Huan Gui, Bang An, Lichan Hong, Ed H. Chi

In this paper, we extend KD with an interactive communication process to help students of downstream tasks learn effectively from pre-trained foundation models.

Knowledge Distillation Transfer Learning
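
For context, the one-way knowledge distillation baseline that such an interactive scheme extends is the classic soft-target loss of Hinton et al.; the sketch below shows that baseline, not the paper's communication protocol:

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Classic one-way knowledge distillation: the student matches the
    teacher's temperature-softened distribution plus the hard labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```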

AceGPT, Localizing Large Language Models in Arabic

1 code implementation21 Sep 2023 Huang Huang, Fei Yu, Jianqing Zhu, Xuening Sun, Hao Cheng, Dingjie Song, Zhihong Chen, Abdulmohsen Alharthi, Bang An, Juncai He, Ziche Liu, Zhiyi Zhang, Junying Chen, Jianquan Li, Benyou Wang, Lian Zhang, Ruoyu Sun, Xiang Wan, Haizhou Li, Jinchao Xu

This paper is devoted to the development of a localized Large Language Model (LLM) specifically for Arabic, a language imbued with unique cultural characteristics inadequately addressed by current mainstream models.

Instruction Following Language Modelling +2

PerceptionCLIP: Visual Classification by Inferring and Conditioning on Contexts

1 code implementation2 Aug 2023 Bang An, Sicheng Zhu, Michael-Andrei Panaitescu-Liess, Chaithanya Kumar Mummadi, Furong Huang

We observe that providing CLIP with contextual attributes improves zero-shot image classification and mitigates reliance on spurious features.

Classification Image Classification +4
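
The general idea, conditioning zero-shot prompts on a contextual attribute, can be sketched with the Hugging Face CLIP checkpoint below; note that this hard-codes the context rather than inferring it as the paper does, and the prompt templates and image path are placeholders:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

classes = ["dog", "cat"]
context = "a photo taken at night"  # a contextual attribute

plain = [f"a photo of a {c}" for c in classes]
conditioned = [f"{context}, of a {c}" for c in classes]

image = Image.open("example.jpg")  # placeholder path
for prompts in (plain, conditioned):
    inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        probs = model(**inputs).logits_per_image.softmax(dim=-1)
    print(dict(zip(classes, probs[0].tolist())))
```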

GFairHint: Improving Individual Fairness for Graph Neural Networks via Fairness Hint

no code implementations25 May 2023 Paiheng Xu, YuHang Zhou, Bang An, Wei Ai, Furong Huang

Given the growing concerns about fairness in machine learning and the impressive performance of Graph Neural Networks (GNNs) on graph data learning, algorithmic fairness in GNNs has attracted significant attention.

Fairness Link Prediction

On the Possibilities of AI-Generated Text Detection

no code implementations10 Apr 2023 Souradip Chakraborty, Amrit Singh Bedi, Sicheng Zhu, Bang An, Dinesh Manocha, Furong Huang

Our work addresses the critical issue of distinguishing text generated by Large Language Models (LLMs) from human-produced text, a task essential for numerous applications.

Text Detection

Transferring Fairness under Distribution Shifts via Fair Consistency Regularization

1 code implementation26 Jun 2022 Bang An, Zora Che, Mucong Ding, Furong Huang

In many real-world applications, however, the assumption of identical training and test distributions is often violated: previously trained fair models are deployed in a different environment, and the fairness of such models has been observed to collapse.

Fairness
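
One generic way to instantiate a consistency regularizer that respects group structure (a sketch under assumptions, not necessarily the paper's exact objective; `model`, `x_aug`, and `group` are hypothetical inputs) is:

```python
import torch
import torch.nn.functional as F

def fair_consistency_penalty(model, x, x_aug, group):
    """Sketch: keep predictions consistent under augmentation, averaging
    the penalty within each demographic group first so that small groups
    count equally and no group's behavior drifts more under shift."""
    p = F.softmax(model(x), dim=-1)
    p_aug = F.softmax(model(x_aug), dim=-1)
    per_example = F.kl_div(p_aug.log(), p, reduction="none").sum(-1)
    groups = group.unique()
    return torch.stack([per_example[group == g].mean() for g in groups]).mean()
```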

HintNet: Hierarchical Knowledge Transfer Networks for Traffic Accident Forecasting on Heterogeneous Spatio-Temporal Data

1 code implementation7 Mar 2022 Bang An, Amin Vahedian, Xun Zhou, W. Nick Street, Yanhua Li

Traffic accident forecasting is challenging, however, due to the spatial heterogeneity of the environment and the sparsity of accidents in space and time.

Management Transfer Learning

Understanding the Generalization Benefit of Model Invariance from a Data Perspective

1 code implementation NeurIPS 2021 Sicheng Zhu, Bang An, Furong Huang

Based on this notion, we refine the generalization bound for invariant models and characterize the suitability of a set of data transformations by the sample covering number induced by transformations, i.e., the smallest size of its induced sample covers.

Generalization Bounds
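
Reading the abstract literally (a hedged paraphrase, not the paper's verbatim definition), a sample cover induced by a set of transformations G and the associated covering number can be written as:

```latex
% Hedged paraphrase of the abstract's notion, not the paper's exact definition.
% A subset C of the sample S is an \epsilon-sample cover under transformations G
% if every sample point is approximately reachable from C:
\forall x \in S,\ \exists\, c \in C,\ g \in G:\ d\big(g(c),\, x\big) \le \epsilon
% The sample covering number is then the size of the smallest such cover:
N_G(S, \epsilon) = \min \{\, |C| : C \subseteq S \text{ is an } \epsilon\text{-sample cover of } S \,\}
```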

Adaptive Transfer Learning on Graph Neural Networks

1 code implementation19 Jul 2021 Xueting Han, Zhenhuan Huang, Bang An, Jing Bai

We design an adaptive auxiliary loss weighting model to learn the weights of auxiliary tasks by quantifying the consistency between auxiliary tasks and the target task.

Meta-Learning Multi-Task Learning
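
One plausible way to quantify such consistency (a sketch, not necessarily the paper's weighting model) is gradient agreement between the target loss and each auxiliary loss on shared parameters:

```python
import torch
import torch.nn.functional as F

def task_consistency(target_loss, aux_loss, shared_params):
    """Proxy for task consistency: cosine similarity between the gradients
    of the target and auxiliary losses w.r.t. shared parameters."""
    g_t = torch.cat([g.flatten() for g in
                     torch.autograd.grad(target_loss, shared_params, retain_graph=True)])
    g_a = torch.cat([g.flatten() for g in
                     torch.autograd.grad(aux_loss, shared_params, retain_graph=True)])
    return F.cosine_similarity(g_t, g_a, dim=0)

def weighted_loss(target_loss, aux_losses, shared_params):
    """Down-weight auxiliary tasks whose gradients disagree with the target."""
    total = target_loss
    for aux in aux_losses:
        w = task_consistency(target_loss, aux, shared_params).clamp(min=0.0)
        total = total + w.detach() * aux
    return total
```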

Guess First to Enable Better Compression and Adversarial Robustness

no code implementations10 Jan 2020 Sicheng Zhu, Bang An, Shiyu Niu

Machine learning models are generally vulnerable to adversarial examples, in contrast to the robustness of humans.

Adversarial Robustness
