Search Results for author: YaoLiang Yu

Found 24 papers, 12 papers with code

Indiscriminate Data Poisoning Attacks on Pre-trained Feature Extractors

no code implementations 20 Feb 2024 Yiwei Lu, Matthew Y. R. Yang, Gautam Kamath, YaoLiang Yu

In this paper, we extend the exploration of the threat of indiscriminate attacks on downstream tasks that apply pre-trained feature extractors.

Data Poisoning Domain Adaptation +2

$f$-MICL: Understanding and Generalizing InfoNCE-based Contrastive Learning

no code implementations 15 Feb 2024 Yiwei Lu, Guojun Zhang, Sun Sun, Hongyu Guo, YaoLiang Yu

In self-supervised contrastive learning, a widely-adopted objective function is InfoNCE, which uses the heuristic cosine similarity for the representation comparison, and is closely related to maximizing the Kullback-Leibler (KL)-based mutual information.

Contrastive Learning
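As context for the InfoNCE objective the abstract refers to, here is a minimal NumPy sketch (the function name and toy data are ours): cosine similarities between paired embeddings are scaled by a temperature and pushed through a softmax cross-entropy, with the other rows in the batch serving as negatives.

```python
import numpy as np

def infonce_loss(anchors, positives, temperature=0.1):
    """Minimal InfoNCE sketch: cosine similarity + softmax cross-entropy.
    Row i of `anchors` is paired with row i of `positives`; all other
    rows act as negatives."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # cosine similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # matched pairs on the diagonal

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
loss_aligned = infonce_loss(x, x)                       # perfect positives
loss_random = infonce_loss(x, rng.normal(size=(8, 16))) # unrelated positives
```

Aligned pairs yield a much lower loss than random pairs, which is the behavior the objective rewards.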

Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks

1 code implementation 7 Mar 2023 Yiwei Lu, Gautam Kamath, YaoLiang Yu

Building on existing parameter corruption attacks and refining the Gradient Canceling attack, we perform extensive experiments to confirm our theoretical findings, test the predictability of our transition threshold, and significantly improve existing indiscriminate data poisoning baselines over a range of datasets and models.

Data Poisoning Model Poisoning +1
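The gradient-canceling idea, crafting poison points so that the total training gradient vanishes at the attacker's target parameters, can be illustrated in closed form for least-squares regression (a toy simplification of our own; the paper attacks neural networks by optimization):

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean least-squares task and the parameters the attacker wants trained.
X = rng.normal(size=(20, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=20)
w_target = np.array([0.0, 1.0, 1.0])

g_clean = X.T @ (X @ w_target - y)   # clean-data gradient at w_target

# Pick arbitrary poison features, solve Xp^T r = -g_clean for the poison
# residuals r, then back out labels that realize those residuals at w_target.
Xp = rng.normal(size=(5, 3))
r = np.linalg.lstsq(Xp.T, -g_clean, rcond=None)[0]
yp = Xp @ w_target - r

# With the poison included, the total gradient vanishes: w_target is
# (approximately) a stationary point of training on the combined data.
g_total = g_clean + Xp.T @ (Xp @ w_target - yp)
```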

DP$^2$-VAE: Differentially Private Pre-trained Variational Autoencoders

no code implementations 5 Aug 2022 Dihong Jiang, Guojun Zhang, Mahdi Karami, Xi Chen, Yunfeng Shao, YaoLiang Yu

Similar to other differentially private (DP) learners, the major challenge for differentially private generative models (DPGMs) also lies in striking a subtle balance between utility and privacy.

Building an Efficiency Pipeline: Commutativity and Cumulativeness of Efficiency Operators for Transformers

no code implementations 31 Jul 2022 Ji Xin, Raphael Tang, Zhiying Jiang, YaoLiang Yu, Jimmy Lin

There exists a wide variety of efficiency methods for natural language processing (NLP) tasks, such as pruning, distillation, dynamic inference, quantization, etc.

Quantization
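The commutativity question — does applying one efficiency operator before another yield the same model as the reverse order? — can be checked directly on a toy weight vector. Magnitude pruning and uniform quantization below are simplified stand-ins for the paper's operators:

```python
import numpy as np

def prune(w, k=2):
    """Magnitude pruning: keep only the k largest-magnitude weights."""
    out = np.zeros_like(w)
    idx = np.argsort(np.abs(w))[-k:]
    out[idx] = w[idx]
    return out

def quantize(w, step=0.5):
    """Uniform quantization: round each weight to a fixed grid."""
    return np.round(w / step) * step

w = np.array([0.9, -0.4, 0.2, 1.6])
pq = quantize(prune(w))   # prune first, then quantize
qp = prune(quantize(w))   # quantize first, then prune
commutes = np.allclose(pq, qp)
```

On this vector the two orders agree; in general they need not, which is exactly what makes the commutativity analysis worthwhile.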

Mitigating Data Heterogeneity in Federated Learning with Data Augmentation

1 code implementation 20 Jun 2022 Artur Back de Luca, Guojun Zhang, Xi Chen, YaoLiang Yu

Federated Learning (FL) is a prominent framework that enables training a centralized model while securing user privacy by fusing local, decentralized models.

Data Augmentation Domain Generalization +1
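The fusion step described above is, in its simplest FedAvg-style form, a data-size-weighted average of client models (a minimal sketch of the fusion step only, not the paper's augmentation method):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """One federated fusion round: average client model parameters,
    weighted by each client's local dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()          # normalized weights
    stacked = np.stack(client_weights)    # shape: (n_clients, n_params)
    return coeffs @ stacked               # weighted average of parameters

# Two clients: the second holds 3x as much data, so it dominates the average.
global_w = fedavg([np.array([1.0, 2.0]), np.array([3.0, 4.0])], [1, 3])
```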

Towards Explanation for Unsupervised Graph-Level Representation Learning

1 code implementation 20 May 2022 Qinghua Zheng, Jihong Wang, Minnan Luo, YaoLiang Yu, Jundong Li, Lina Yao, Xiaojun Chang

Due to the superior performance of Graph Neural Networks (GNNs) in various domains, there is an increasing interest in the GNN explanation problem: "which fraction of the input graph is the most crucial to the model's decision?"

Decision Making Graph Classification +2

Indiscriminate Data Poisoning Attacks on Neural Networks

1 code implementation 19 Apr 2022 Yiwei Lu, Gautam Kamath, YaoLiang Yu

Data poisoning attacks, in which a malicious adversary aims to influence a model by injecting "poisoned" data into the training process, have attracted significant recent attention.

Data Poisoning

Proportional Fairness in Federated Learning

1 code implementation 3 Feb 2022 Guojun Zhang, Saber Malekmohammadi, Xi Chen, YaoLiang Yu

With the increasingly broad deployment of federated learning (FL) systems in the real world, it is critical but challenging to ensure fairness in FL, i.e., reasonably satisfactory performance for each of the numerous diverse clients.

Fairness Federated Learning

Are My Deep Learning Systems Fair? An Empirical Study of Fixed-Seed Training

no code implementations NeurIPS 2021 Shangshu Qian, Hung Pham, Thibaud Lutellier, Zeou Hu, Jungwon Kim, Lin Tan, YaoLiang Yu, Jiahao Chen, Sameena Shah

Our study of 22 mitigation techniques and five baselines reveals up to 12.6% fairness variance across identical training runs with identical seeds.

Crime Prediction Fairness

Demystifying and Generalizing BinaryConnect

no code implementations NeurIPS 2021 Tim Dockhorn, YaoLiang Yu, Eyyüb Sari, Mahdi Zolnouri, Vahid Partovi Nia

BinaryConnect (BC) and its many variations have become the de facto standard for neural network quantization.

Quantization
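The core BC trick is to binarize weights for the forward and backward pass while accumulating updates in latent real-valued weights. A minimal sketch (the toy gradient function below is ours):

```python
import numpy as np

def bc_step(w_real, grad_fn, lr=0.1):
    """One BinaryConnect-style update: the forward/backward pass sees
    sign(w), but the gradient is applied to the latent real weights."""
    w_bin = np.sign(w_real)   # weights actually used by the network
    g = grad_fn(w_bin)        # gradient evaluated at the binary weights
    return w_real - lr * g    # update accumulates on the real weights

# Toy loss 0.5*||w - [1, 1]||^2, so grad(w) = w - [1, 1].
w_new = bc_step(np.array([0.3, -0.7]), lambda wb: wb - np.array([1.0, 1.0]))
```

Note the first coordinate receives zero gradient (its binarized value already matches the target sign), while the second keeps drifting toward a sign flip — the mechanism that lets BC train despite the hard binarization.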

Conditional Generative Quantile Networks via Optimal Transport and Convex Potentials

no code implementations 29 Sep 2021 Jesse Sun, Dihong Jiang, YaoLiang Yu

Quantile regression extends naturally to generative modelling by leveraging the stronger notion of pointwise convergence rather than convergence in distribution.
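For reference, the pinball loss underlying quantile regression — whose minimizer over a constant predictor is the empirical τ-quantile — fits in a few lines (a generic sketch, not the paper's optimal-transport construction):

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Pinball (quantile) loss: penalizes under-prediction with weight tau
    and over-prediction with weight 1 - tau."""
    diff = y_true - y_pred
    return np.mean(np.maximum(tau * diff, (tau - 1) * diff))

y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
loss_at_median = pinball_loss(y, 3.0, tau=0.5)  # median minimizes tau = 0.5
loss_off = pinball_loss(y, 1.0, tau=0.5)
```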

$f$-Mutual Information Contrastive Learning

no code implementations 29 Sep 2021 Guojun Zhang, Yiwei Lu, Sun Sun, Hongyu Guo, YaoLiang Yu

Self-supervised contrastive learning is an emerging field due to its power in providing good data representations.

Contrastive Learning

An Operator Splitting View of Federated Learning

no code implementations 12 Aug 2021 Saber Malekmohammadi, Kiarash Shaloudegi, Zeou Hu, YaoLiang Yu

Over the past few years, the federated learning ($\texttt{FL}$) community has witnessed a proliferation of new $\texttt{FL}$ algorithms.

Federated Learning

The Art of Abstention: Selective Prediction and Error Regularization for Natural Language Processing

1 code implementation ACL 2021 Ji Xin, Raphael Tang, YaoLiang Yu, Jimmy Lin

To fill this void in the literature, we study selective prediction for NLP, comparing different models and confidence estimators.
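One standard baseline confidence estimator for selective prediction is the maximum softmax probability: predict only when it clears a threshold, abstain otherwise. A minimal sketch (the threshold and toy probabilities are illustrative, not the paper's estimator):

```python
import numpy as np

def selective_predict(probs, threshold=0.8):
    """Predict the argmax class when the max softmax probability is at
    least `threshold`; otherwise abstain (marked with -1)."""
    conf = probs.max(axis=1)
    preds = probs.argmax(axis=1)
    return np.where(conf >= threshold, preds, -1)

probs = np.array([[0.9, 0.1],    # confident -> predict class 0
                  [0.55, 0.45]]) # uncertain -> abstain
decisions = selective_predict(probs)
```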

$S^3$: Sign-Sparse-Shift Reparametrization for Effective Training of Low-bit Shift Networks

1 code implementation NeurIPS 2021 Xinlin Li, Bang Liu, YaoLiang Yu, Wulong Liu, Chunjing Xu, Vahid Partovi Nia

Shift neural networks reduce computation complexity by removing expensive multiplication operations and quantizing continuous weights into low-bit discrete values, which are fast and energy efficient compared to conventional neural networks.
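The weight format behind shift networks rounds each weight to a signed power of two, so multiplications reduce to bit-shifts. A sketch of just the quantizer (the paper's actual contribution, the S^3 reparametrization for training such weights, is not shown):

```python
import numpy as np

def quantize_to_shift(w, min_exp=-6, max_exp=0):
    """Round each weight to the nearest signed power of two,
    w ~ sign(w) * 2**p with integer p, clipped to a supported range."""
    sign = np.sign(w)
    exp = np.clip(np.round(np.log2(np.abs(w) + 1e-12)), min_exp, max_exp)
    return sign * 2.0 ** exp

wq = quantize_to_shift(np.array([0.26, -0.9, 0.05]))
```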

Quantifying and Improving Transferability in Domain Generalization

2 code implementations NeurIPS 2021 Guojun Zhang, Han Zhao, YaoLiang Yu, Pascal Poupart

We then prove that our transferability can be estimated with enough samples and give a new upper bound for the target error based on our transferability.

Domain Generalization Out-of-Distribution Generalization

BERxiT: Early Exiting for BERT with Better Fine-Tuning and Extension to Regression

1 code implementation EACL 2021 Ji Xin, Raphael Tang, YaoLiang Yu, Jimmy Lin

The slow speed of BERT has motivated much research on accelerating its inference, and the early exiting idea has been proposed to make trade-offs between model quality and efficiency.

regression
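The early-exiting idea attaches a classifier head to each layer and stops at the first head that is confident enough. A minimal confidence-threshold sketch (this fixed-threshold rule is a common simplification, not BERxiT's learned exit mechanism):

```python
import numpy as np

def early_exit(layer_logits, threshold=0.9):
    """Scan intermediate heads in order; return (prediction, depth) at the
    first head whose softmax confidence clears the threshold, else fall
    through to the final head."""
    for depth, logits in enumerate(layer_logits, start=1):
        e = np.exp(logits - logits.max())
        probs = e / e.sum()
        if probs.max() >= threshold:
            return int(probs.argmax()), depth   # confident: exit early
    return int(probs.argmax()), depth           # last head's prediction

# Head 1 is uncertain, head 2 is confident -> exit at depth 2.
pred, depth = early_exit([np.array([0.1, 0.2]), np.array([4.0, 0.0])])
```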

Posterior Differential Regularization with f-divergence for Improving Model Robustness

2 code implementations NAACL 2021 Hao Cheng, Xiaodong Liu, Lis Pereira, YaoLiang Yu, Jianfeng Gao

Theoretically, we provide a connection of two recent methods, Jacobian Regularization and Virtual Adversarial Training, under this framework.

Domain Generalization
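The regularizer penalizes the divergence between the model's posteriors on a clean input and a perturbed copy, the smoothness view that connects Jacobian Regularization and Virtual Adversarial Training. A toy sketch using KL as the f-divergence and random (not adversarial) noise — both simplifications of ours:

```python
import numpy as np

def kl_divergence(p, q):
    return float(np.sum(p * np.log(p / q)))

def posterior_differential_penalty(predict, x, eps=1e-2, seed=0):
    """f-divergence (here KL) between the model's posterior on x and on
    a randomly perturbed copy of x."""
    rng = np.random.default_rng(seed)
    noise = eps * rng.normal(size=x.shape)
    return kl_divergence(predict(x), predict(x + noise))

# Toy softmax-linear classifier to evaluate the penalty on.
W = np.array([[1.0, -1.0], [0.5, 2.0]])
def predict(x):
    z = W @ x
    e = np.exp(z - z.max())
    return e / e.sum()

x = np.array([0.3, -0.2])
penalty = posterior_differential_penalty(predict, x, eps=0.1)
```

The penalty vanishes when the perturbation is zero and grows as the posterior becomes less smooth around x.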

OLALA: Object-Level Active Learning for Efficient Document Layout Annotation

1 code implementation 5 Oct 2020 Zejiang Shen, Jian Zhao, Melissa Dell, YaoLiang Yu, Weining Li

Document images often have intricate layout structures, with numerous content regions (e.g., texts, figures, tables) densely arranged on each page.

Active Learning Object +1

Efficient Structured Matrix Rank Minimization

no code implementations NeurIPS 2014 Adams Wei Yu, Wanli Ma, YaoLiang Yu, Jaime Carbonell, Suvrit Sra

We study the problem of finding structured low-rank matrices using nuclear norm regularization where the structure is encoded by a linear map.
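The workhorse step in nuclear-norm-regularized rank minimization is singular value thresholding, the proximal operator of the nuclear norm. A generic sketch (efficiently handling the structure-encoding linear map is the paper's contribution and is not shown here):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: shrink each singular value by tau and
    drop those that fall to zero -- the prox operator of tau*||.||_*."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

# Thresholding a diagonal matrix: sigma = (3, 0.5) -> (2, 0), rank drops.
A = svt(np.array([[3.0, 0.0], [0.0, 0.5]]), 1.0)
```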
