Search Results for author: Zhuolin Yang

Found 12 papers, 4 papers with code

Interpolation for Robust Learning: Data Augmentation on Wasserstein Geodesics

no code implementations • 4 Feb 2023 • Jiacheng Zhu, JieLin Qiu, Aritra Guha, Zhuolin Yang, XuanLong Nguyen, Bo Li, Ding Zhao

Our work provides a new perspective on model robustness through the lens of Wasserstein geodesic-based interpolation, with a practical off-the-shelf strategy that can be combined with existing robust training methods.

Data Augmentation
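
To make the geodesic interpolation described above concrete, here is a minimal sketch of Wasserstein displacement interpolation between two empirical distributions, written with the POT (Python Optimal Transport) library. This coupling-based construction is a standard one that only illustrates the flavor of the augmentation; `geodesic_interpolate` and the uniform-weight setup are assumptions of this sketch, not the authors' implementation.

```python
# Minimal sketch: displacement interpolation along the Wasserstein geodesic
# between two empirical distributions, using POT (pip install POT).
import numpy as np
import ot

def geodesic_interpolate(X, Y, t):
    """Samples from the point at time t in [0, 1] on the Wasserstein geodesic
    between the empirical distributions of X and Y (illustrative, assumed name)."""
    n, m = len(X), len(Y)
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)  # uniform sample weights
    M = ot.dist(X, Y)                                # squared-Euclidean cost matrix
    G = ot.emd(a, b, M)                              # optimal transport coupling
    i, j = np.nonzero(G)                             # matched sample pairs
    # McCann interpolation: move each matched pair a fraction t of the way.
    return (1.0 - t) * X[i] + t * Y[j]

X = np.random.randn(50, 2)
Y = np.random.randn(50, 2) + 3.0
augmented = geodesic_interpolate(X, Y, 0.5)  # samples "between" X and Y
```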

GeoECG: Data Augmentation via Wasserstein Geodesic Perturbation for Robust Electrocardiogram Prediction

no code implementations • 2 Aug 2022 • Jiacheng Zhu, JieLin Qiu, Zhuolin Yang, Douglas Weber, Michael A. Rosenberg, Emerson Liu, Bo Li, Ding Zhao

In this paper, we propose a physiologically inspired data augmentation method to improve performance and increase the robustness of heart disease detection based on ECG signals.

Data Augmentation

On the Certified Robustness for Ensemble Models and Beyond

no code implementations • ICLR 2022 • Zhuolin Yang, Linyi Li, Xiaojun Xu, Bhavya Kailkhura, Tao Xie, Bo Li

Thus, to explore the conditions that guarantee certifiably robust ensemble ML models, we first prove that diversified gradients and a large confidence margin are sufficient and necessary conditions for certifiably robust ensemble models under the model-smoothness assumption.
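
As a concrete reading of the two conditions above, the hedged sketch below computes gradient diversity (cosine similarity between two base models' input-loss gradients) and the ensemble's confidence margin. All names and structure are illustrative assumptions, not taken from the paper's code.

```python
# Sketch of the two quantities the condition refers to: gradient diversity
# and confidence margin (illustrative, assumed implementation).
import torch
import torch.nn.functional as F

def gradient_cosine(model_a, model_b, x, y):
    """Cosine similarity between the two base models' loss gradients at x;
    values near zero indicate diversified (near-orthogonal) gradients."""
    xg = x.clone().requires_grad_(True)
    ga = torch.autograd.grad(F.cross_entropy(model_a(xg), y), xg)[0]
    gb = torch.autograd.grad(F.cross_entropy(model_b(xg), y), xg)[0]
    return F.cosine_similarity(ga.flatten(1), gb.flatten(1), dim=1)

def confidence_margin(models, x):
    """Margin of the averaged ensemble prediction: p_top1 minus p_top2."""
    probs = torch.stack([F.softmax(m(x), dim=1) for m in models]).mean(dim=0)
    top2 = probs.topk(2, dim=1).values
    return top2[:, 0] - top2[:, 1]
```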

TRS: Transferability Reduced Ensemble via Encouraging Gradient Diversity and Model Smoothness

1 code implementation • NeurIPS 2021 • Zhuolin Yang, Linyi Li, Xiaojun Xu, Shiliang Zuo, Qian Chen, Benjamin Rubinstein, Pan Zhou, Ce Zhang, Bo Li

To answer these questions, in this work we first theoretically analyze and outline sufficient conditions for adversarial transferability between models, and then propose a practical algorithm that reduces the transferability between base models within an ensemble to improve its robustness.
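
The following is a minimal sketch of a TRS-style regularizer in the spirit of the idea above: penalize gradient alignment between two base models and encourage smoothness via a gradient-norm term. The coefficients and the smoothness proxy are illustrative assumptions rather than the paper's exact formulation.

```python
# Illustrative TRS-style penalty, to be added to the ensemble's training loss.
import torch
import torch.nn.functional as F

def trs_regularizer(model_a, model_b, x, y, lam_sim=1.0, lam_smooth=0.1):
    """lam_sim and lam_smooth are assumed hyperparameters, not reported values."""
    xg = x.clone().requires_grad_(True)
    ga = torch.autograd.grad(F.cross_entropy(model_a(xg), y), xg, create_graph=True)[0]
    gb = torch.autograd.grad(F.cross_entropy(model_b(xg), y), xg, create_graph=True)[0]
    # Penalize |cosine similarity| so the base models' gradients decorrelate.
    sim = F.cosine_similarity(ga.flatten(1), gb.flatten(1), dim=1).abs().mean()
    # Small input-gradient norms serve here as a crude model-smoothness proxy.
    smooth = ga.flatten(1).norm(dim=1).mean() + gb.flatten(1).norm(dim=1).mean()
    return lam_sim * sim + lam_smooth * smooth
```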

Understanding Robustness in Teacher-Student Setting: A New Perspective

no code implementations • 25 Feb 2021 • Zhuolin Yang, Zhaoxi Chen, Tiffany Cai, Xinyun Chen, Bo Li, Yuandong Tian

Extensive experiments show that student specialization correlates strongly with model robustness in different scenarios, including students trained via standard training, adversarial training, confidence-calibrated adversarial training, and training with a robust feature dataset.

BIG-bench Machine Learning · Data Augmentation

Uncovering the Connections Between Adversarial Transferability and Knowledge Transferability

2 code implementations • 25 Jun 2020 • Kaizhao Liang, Jacky Y. Zhang, Boxin Wang, Zhuolin Yang, Oluwasanmi Koyejo, Bo Li

Knowledge transferability, or transfer learning, has been widely adopted to allow a pre-trained model in the source domain to be effectively adapted to downstream tasks in the target domain.

Transfer Learning

Improving Certified Robustness via Statistical Learning with Logical Reasoning

1 code implementation • 28 Feb 2020 • Zhuolin Yang, Zhikuan Zhao, Boxin Wang, Jiawei Zhang, Linyi Li, Hengzhi Pei, Bojan Karlas, Ji Liu, Heng Guo, Ce Zhang, Bo Li

Intensive algorithmic efforts have recently been made to enable rapid improvements in the certified robustness of complex ML models.

BIG-bench Machine Learning · Logical Reasoning

G-PATE: Scalable Differentially Private Data Generator via Private Aggregation of Teacher Discriminators

2 code implementations • NeurIPS 2021 • Yunhui Long, Boxin Wang, Zhuolin Yang, Bhavya Kailkhura, Aston Zhang, Carl A. Gunter, Bo Li

In particular, we train a student data generator with an ensemble of teacher discriminators and propose a novel private gradient aggregation mechanism to ensure differential privacy on all information that flows from teacher discriminators to the student generator.

BIG-bench Machine Learning · Privacy Preserving
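
A much-simplified, hedged sketch of the private aggregation idea described above: each teacher discriminator contributes only a noisy vote on the per-dimension sign of the gradient passed to the student generator. The actual mechanism involves gradient discretization and careful differential-privacy accounting; `sigma` here is an illustrative noise scale.

```python
# Simplified sketch of noisy teacher-gradient aggregation (assumed names).
import torch

def private_sign_gradient(teacher_grads, sigma=1.0):
    """teacher_grads: tensor of shape (num_teachers, grad_dim).
    Returns a differentially private sign signal for the student generator."""
    votes = torch.sign(teacher_grads).sum(dim=0)     # per-dimension vote tally
    noisy = votes + sigma * torch.randn_like(votes)  # Gaussian noise before release
    return torch.sign(noisy)                         # only this reaches the student
```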

Characterizing Audio Adversarial Examples Using Temporal Dependency

no code implementations • ICLR 2019 • Zhuolin Yang, Bo Li, Pin-Yu Chen, Dawn Song

In particular, our results reveal the importance of using the temporal dependency in audio data to gain discriminative power against adversarial examples.

Adversarial Defense · Automatic Speech Recognition +2
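
A hedged sketch of the temporal-dependency idea: transcribe a prefix of the waveform and compare it with the corresponding prefix of the full transcription, flagging inputs where the two disagree. `transcribe` stands in for an arbitrary ASR system and is an assumption of this sketch, as is the similarity threshold.

```python
# Temporal-dependency consistency check for audio (illustrative sketch).
from difflib import SequenceMatcher

def td_consistency(waveform, transcribe, k=0.5):
    """Compare the transcription of the first k fraction of the audio with
    the matching prefix of the full transcription; low scores are suspicious."""
    full = transcribe(waveform)
    prefix = transcribe(waveform[: int(len(waveform) * k)])
    ref = full[: len(prefix)]
    return SequenceMatcher(None, prefix, ref).ratio()

# Usage: flag an input as adversarial when td_consistency(x, asr_fn)
# falls below a threshold chosen on benign validation data.
```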
