Search Results for author: Tianyu Pang

Found 43 papers, 36 papers with code

LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition

1 code implementation 25 Jul 2023 Chengsong Huang, Qian Liu, Bill Yuchen Lin, Tianyu Pang, Chao Du, Min Lin

Low-rank adaptations (LoRA) are often employed to fine-tune large language models (LLMs) for new tasks.
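
A minimal PyTorch sketch of what a low-rank adaptation layer looks like (class and hyperparameter names are illustrative, not LoraHub's actual API): a frozen pretrained weight is augmented with a trainable low-rank update BA, and composing several adapters amounts to combining their BA products.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update (illustrative sketch)."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                      # keep the pretrained weights frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: no change at start
        self.scale = alpha / rank

    def forward(self, x):
        # y = W x + scale * B A x
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```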

AdAM: Few-Shot Image Generation via Adaptation-Aware Kernel Modulation

no code implementations 4 Jul 2023 Yunqing Zhao, Keshigeyan Chandrasegaran, Milad Abdollahzadeh, Chao Du, Tianyu Pang, Ruoteng Li, Henghui Ding, Ngai-Man Cheung

However, a major limitation of existing methods is that their knowledge-preserving criteria consider only the source domain/task and ignore the target domain/adaptation when selecting source knowledge, casting doubt on their suitability for setups with varying proximity between source and target domains.

Domain Adaptation Image Generation

A Closer Look at the Adversarial Robustness of Deep Equilibrium Models

1 code implementation 2 Jun 2023 Zonghan Yang, Tianyu Pang, Yang Liu

Deep equilibrium models (DEQs) refrain from the traditional layer-stacking paradigm and turn to find the fixed point of a single layer.

Adversarial Defense Adversarial Robustness
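
For background, a DEQ's forward pass looks for the fixed point z* = f(z*, x) of a single layer f. A minimal sketch using naive fixed-point iteration (real DEQs typically use root-finding solvers and implicit differentiation):

```python
import torch

def deq_forward(f, x, z0=None, max_iter=50, tol=1e-4):
    """Iterate z <- f(z, x) until the update is small; returns an approximate fixed point."""
    z = torch.zeros_like(x) if z0 is None else z0
    for _ in range(max_iter):
        z_next = f(z, x)
        if torch.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z
```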

Improving Adversarial Robustness of DEQs with Explicit Regulations Along the Neural Dynamics

1 code implementation 2 Jun 2023 Zonghan Yang, Peng Li, Tianyu Pang, Yang Liu

To this end, we interpret DEQs through the lens of neural dynamics and find that AT under-regulates intermediate states.

Adversarial Robustness

Efficient Diffusion Policies for Offline Reinforcement Learning

1 code implementation 31 May 2023 Bingyi Kang, Xiao Ma, Chao Du, Tianyu Pang, Shuicheng Yan

2) It is incompatible with maximum likelihood-based RL algorithms (e.g., policy gradient methods) as the likelihood of diffusion models is intractable.

D4RL Offline RL +3

On Evaluating Adversarial Robustness of Large Vision-Language Models

1 code implementation 26 May 2023 Yunqing Zhao, Tianyu Pang, Chao Du, Xiao Yang, Chongxuan Li, Ngai-Man Cheung, Min Lin

Large vision-language models (VLMs) such as GPT-4 have achieved unprecedented performance in response generation, especially with visual inputs, enabling more creative and adaptable interaction than large language models such as ChatGPT.

Adversarial Robustness multimodal generation +1

Nonparametric Generative Modeling with Conditional Sliced-Wasserstein Flows

2 code implementations 3 May 2023 Chao Du, Tianbo Li, Tianyu Pang, Shuicheng Yan, Min Lin

Sliced-Wasserstein Flow (SWF) is a promising approach to nonparametric generative modeling but has not been widely adopted due to its suboptimal generative quality and lack of conditional modeling capabilities.
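
Background sketch for readers new to the sliced-Wasserstein distance the entry assumes: project both sample sets onto random 1-D directions, where optimal transport reduces to comparing sorted projections (a Monte Carlo estimate in NumPy; equal sample sizes are assumed for simplicity):

```python
import numpy as np

def sliced_wasserstein(x, y, n_proj=128, p=2, rng=None):
    """Monte Carlo estimate of SW_p between two samples x, y of shape (n, d)."""
    rng = np.random.default_rng() if rng is None else rng
    theta = rng.normal(size=(n_proj, x.shape[1]))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)   # random unit directions
    x_proj = np.sort(x @ theta.T, axis=0)                   # 1-D projections, sorted
    y_proj = np.sort(y @ theta.T, axis=0)
    return np.mean(np.abs(x_proj - y_proj) ** p) ** (1.0 / p)
```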

Exploring Incompatible Knowledge Transfer in Few-shot Image Generation

1 code implementation CVPR 2023 Yunqing Zhao, Chao Du, Milad Abdollahzadeh, Tianyu Pang, Min Lin, Shuicheng Yan, Ngai-Man Cheung

To this end, we propose knowledge truncation to mitigate this issue in FSIG, which is a complementary operation to knowledge preservation and is implemented by a lightweight pruning-based method.

Image Generation Transfer Learning

CoSDA: Continual Source-Free Domain Adaptation

1 code implementation 13 Apr 2023 Haozhe Feng, Zhaorui Yang, Hesun Chen, Tianyu Pang, Chao Du, Minfeng Zhu, Wei Chen, Shuicheng Yan

Recently, SFDA has gained popularity due to the need to protect the data privacy of the source domain, but it suffers from catastrophic forgetting on the source domain due to the lack of data.

Source-Free Domain Adaptation

A Recipe for Watermarking Diffusion Models

1 code implementation 17 Mar 2023 Yunqing Zhao, Tianyu Pang, Chao Du, Xiao Yang, Ngai-Man Cheung, Min Lin

In this regard, watermarking has been a proven solution for copyright protection and content monitoring, but it is underexplored in the DMs literature.

On Calibrating Diffusion Probabilistic Models

1 code implementation 21 Feb 2023 Tianyu Pang, Cheng Lu, Chao Du, Min Lin, Shuicheng Yan, Zhijie Deng

In this work, we observe that the stochastic reverse process of data scores is a martingale, from which concentration bounds and the optional stopping theorem for data scores can be derived.
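
For readers unfamiliar with the terminology, the martingale property referenced here is the standard one; written out as general background (not the paper's exact notation):

```latex
% A process (M_t) adapted to a filtration (\mathcal{F}_t) is a martingale if
\mathbb{E}\,[\,|M_t|\,] < \infty
\quad\text{and}\quad
\mathbb{E}\,[\,M_{t+1} \mid \mathcal{F}_t\,] = M_t .
% With bounded increments |M_{t+1} - M_t| \le c_t, the Azuma--Hoeffding inequality
% gives a concentration bound of the kind the abstract alludes to:
\Pr\big(\,|M_T - M_0| \ge \lambda\,\big)
\;\le\; 2\exp\!\Big(-\tfrac{\lambda^2}{2\sum_{t<T} c_t^2}\Big).
```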

Better Diffusion Models Further Improve Adversarial Training

2 code implementations 9 Feb 2023 Zekai Wang, Tianyu Pang, Chao Du, Min Lin, Weiwei Liu, Shuicheng Yan

Under the $\ell_\infty$-norm threat model with $\epsilon=8/255$, our models achieve $70.69\%$ and $42.67\%$ robust accuracy on CIFAR-10 and CIFAR-100, respectively, i.e., improving upon previous state-of-the-art models by $+4.58\%$ and $+8.03\%$.

Denoising

Does Federated Learning Really Need Backpropagation?

1 code implementation 28 Jan 2023 Haozhe Feng, Tianyu Pang, Chao Du, Wei Chen, Shuicheng Yan, Min Lin

BAFFLE is 1) memory-efficient and easily fits uploading bandwidth; 2) compatible with inference-only hardware optimization and model quantization or pruning; and 3) well-suited to trusted execution environments, because the clients in BAFFLE only execute forward propagation and return a set of scalars to the server.

Federated Learning Quantization
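
A minimal sketch of the kind of forward-only (zeroth-order) gradient estimate such a backpropagation-free setting permits; the function names are illustrative and this is not BAFFLE's exact estimator:

```python
import numpy as np

def forward_only_grad(loss_fn, theta, n_dirs=16, eps=1e-3, rng=None):
    """Estimate d loss / d theta using only forward evaluations (finite differences along random directions)."""
    rng = np.random.default_rng() if rng is None else rng
    grad = np.zeros_like(theta)
    for _ in range(n_dirs):
        v = rng.normal(size=theta.shape)
        # Each direction costs two forward passes and yields a single scalar.
        delta = (loss_fn(theta + eps * v) - loss_fn(theta - eps * v)) / (2 * eps)
        grad += delta * v
    return grad / n_dirs
```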

$O(N^2)$ Universal Antisymmetry in Fermionic Neural Networks

no code implementations 26 May 2022 Tianyu Pang, Shuicheng Yan, Min Lin

In this paper, we substitute the Slater determinant with a pairwise antisymmetry construction, which is easy to implement and can reduce the computational cost to $O(N^2)$.

Variational Monte Carlo
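
One classical way to build an antisymmetric function from $O(N^2)$ pairwise terms is a Vandermonde-style product of pairwise differences; this only illustrates pairwise antisymmetry and is not necessarily the paper's construction:

```python
import numpy as np

def pairwise_antisymmetric(x, phi=lambda r: r.sum(axis=-1)):
    """prod_{i<j} (phi(x_i) - phi(x_j)); swapping any two rows of x flips the sign."""
    f = phi(x)                      # one scalar feature per particle, shape (N,)
    out = 1.0
    for i in range(len(f)):
        for j in range(i + 1, len(f)):
            out *= f[i] - f[j]      # O(N^2) pairwise factors in total
    return out
```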

Query-Efficient Black-box Adversarial Attacks Guided by a Transfer-based Prior

1 code implementation 13 Mar 2022 Yinpeng Dong, Shuyu Cheng, Tianyu Pang, Hang Su, Jun Zhu

However, the existing methods inevitably suffer from low attack success rates or poor query efficiency since it is difficult to estimate the gradient in a high-dimensional input space with limited information.

Controllable Evaluation and Generation of Physical Adversarial Patch on Face Recognition

no code implementations 9 Mar 2022 Xiao Yang, Yinpeng Dong, Tianyu Pang, Zihao Xiao, Hang Su, Jun Zhu

It is therefore imperative to develop a framework that can enable a comprehensive evaluation of the vulnerability of face recognition in the physical world.

3D Face Modelling Face Recognition

Robustness and Accuracy Could Be Reconcilable by (Proper) Definition

1 code implementation 21 Feb 2022 Tianyu Pang, Min Lin, Xiao Yang, Jun Zhu, Shuicheng Yan

The trade-off between robustness and accuracy has been widely studied in the adversarial literature.

Inductive Bias

Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness

no code implementations 13 Oct 2021 Xiao Yang, Yinpeng Dong, Wenzhao Xiang, Tianyu Pang, Hang Su, Jun Zhu

The vulnerability of deep neural networks to adversarial examples has motivated an increasing number of defense strategies for promoting model robustness.

Adversarial Robustness

Accumulative Poisoning Attacks on Real-time Data

1 code implementation NeurIPS 2021 Tianyu Pang, Xiao Yang, Yinpeng Dong, Hang Su, Jun Zhu

Collecting training data from untrusted sources exposes machine learning services to poisoning adversaries, who maliciously manipulate training data to degrade the model accuracy.

Federated Learning

Exploring Memorization in Adversarial Training

1 code implementation ICLR 2022 Yinpeng Dong, Ke Xu, Xiao Yang, Tianyu Pang, Zhijie Deng, Hang Su, Jun Zhu

In this paper, we explore the memorization effect in adversarial training (AT) for promoting a deeper understanding of model capacity, convergence, generalization, and especially robust overfitting of the adversarially trained models.

Memorization

Two Coupled Rejection Metrics Can Tell Adversarial Examples Apart

1 code implementation CVPR 2022 Tianyu Pang, Huishuai Zhang, Di He, Yinpeng Dong, Hang Su, Wei Chen, Jun Zhu, Tie-Yan Liu

Along with this routine, we find that confidence and a rectified confidence (R-Con) can form two coupled rejection metrics, which could provably distinguish wrongly classified inputs from correctly classified ones.

Vocal Bursts Valence Prediction

Black-box Detection of Backdoor Attacks with Limited Information and Data

no code implementations ICCV 2021 Yinpeng Dong, Xiao Yang, Zhijie Deng, Tianyu Pang, Zihao Xiao, Hang Su, Jun Zhu

Although deep neural networks (DNNs) have made rapid progress in recent years, they are vulnerable in adversarial environments.

Bag of Tricks for Adversarial Training

2 code implementations ICLR 2021 Tianyu Pang, Xiao Yang, Yinpeng Dong, Hang Su, Jun Zhu

Adversarial training (AT) is one of the most effective strategies for promoting model robustness.

Adversarial Robustness Benchmarking
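
For context, a minimal PyTorch sketch of the standard PGD-based adversarial training step that work in this area builds on; epsilon, step size, and step count are illustrative defaults, not the paper's tuned settings:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """L-infinity PGD: ascend the loss within an eps-ball around x, staying in [0, 1]."""
    x_adv = torch.clamp(x + torch.empty_like(x).uniform_(-eps, eps), 0.0, 1.0)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        x_adv = torch.clamp(x + torch.clamp(x_adv - x, -eps, eps), 0.0, 1.0)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """One AT step: craft adversarial examples with the current model and train on them."""
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```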

Efficient Learning of Generative Models via Finite-Difference Score Matching

1 code implementation NeurIPS 2020 Tianyu Pang, Kun Xu, Chongxuan Li, Yang song, Stefano Ermon, Jun Zhu

Several machine learning applications involve the optimization of higher-order derivatives (e.g., gradients of gradients) during training, which can be expensive with respect to memory and computation even with automatic differentiation.
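
The expense comes from differentiating through gradients. As background on the finite-difference idea in the title, a Hessian-vector product can be approximated with two extra first-order gradient evaluations instead of a second backward pass (a generic sketch, not the paper's exact estimator):

```python
import torch

def hvp_finite_difference(f, x, v, eps=1e-3):
    """Approximate (d^2 f / dx^2) v by a central difference of first-order gradients; f returns a scalar."""
    def grad_at(z):
        z = z.detach().requires_grad_(True)
        return torch.autograd.grad(f(z), z)[0]
    return (grad_at(x + eps * v) - grad_at(x - eps * v)) / (2 * eps)
```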

Towards Face Encryption by Generating Adversarial Identity Masks

1 code implementation ICCV 2021 Xiao Yang, Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu, Yuefeng Chen, Hui Xue

As billions of personal data records are shared through social media and networks, data privacy and security have drawn increasing attention.

Face Recognition

Boosting Adversarial Training with Hypersphere Embedding

1 code implementation NeurIPS 2020 Tianyu Pang, Xiao Yang, Yinpeng Dong, Kun Xu, Jun Zhu, Hang Su

Adversarial training (AT) is one of the most effective defenses against adversarial attacks for deep learning models.

Representation Learning

Adversarial Distributional Training for Robust Deep Learning

1 code implementation NeurIPS 2020 Yinpeng Dong, Zhijie Deng, Tianyu Pang, Hang Su, Jun Zhu

Adversarial training (AT) is among the most effective techniques to improve model robustness by augmenting training data with adversarial examples.

Benchmarking Adversarial Robustness

no code implementations 26 Dec 2019 Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Zihao Xiao, Jun Zhu

Deep neural networks are vulnerable to adversarial examples, which has become one of the most important research problems in the development of deep learning.

Adversarial Attack Adversarial Robustness +2

Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks

1 code implementation ICLR 2020 Tianyu Pang, Kun Xu, Jun Zhu

Our experiments on CIFAR-10 and CIFAR-100 demonstrate that MI can further improve the adversarial robustness for the models trained by mixup and its variants.

Adversarial Robustness
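
A rough sketch of the inference-time mixing idea as I understand it, not the paper's exact procedure: each test input is mixed with randomly drawn clean samples and the model's predictions on the mixtures are averaged.

```python
import torch

def mixup_inference(model, x, clean_pool, lam=0.6, n_samples=8):
    """Average predictions over mixtures of the test batch x with random clean samples (illustrative)."""
    probs = 0.0
    for _ in range(n_samples):
        idx = torch.randint(0, clean_pool.size(0), (x.size(0),))
        x_mix = lam * x + (1.0 - lam) * clean_pool[idx]
        probs = probs + torch.softmax(model(x_mix), dim=-1)
    return probs / n_samples
```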

Improving Black-box Adversarial Attacks with a Transfer-based Prior

2 code implementations NeurIPS 2019 Shuyu Cheng, Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu

We consider the black-box adversarial setting, where the adversary has to generate adversarial perturbations without access to the target models to compute gradients.
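
A minimal sketch of the query-based random gradient-free estimation this setting typically relies on; the gradient of a white-box surrogate model can serve as a prior that biases the sampled directions (illustrative only, not the paper's exact algorithm):

```python
import numpy as np

def rgf_gradient(query_loss, x, n_queries=50, sigma=1e-3, prior=None, prior_weight=0.5, rng=None):
    """Estimate the loss gradient at x from score queries only, optionally nudged toward a transfer prior."""
    rng = np.random.default_rng() if rng is None else rng
    base = query_loss(x)
    grad = np.zeros_like(x)
    for _ in range(n_queries):
        u = rng.normal(size=x.shape)
        if prior is not None:
            u = prior_weight * prior + (1 - prior_weight) * u   # bias directions toward the surrogate gradient
        u /= np.linalg.norm(u) + 1e-12
        grad += (query_loss(x + sigma * u) - base) / sigma * u
    return grad / n_queries
```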

Rethinking Softmax Cross-Entropy Loss for Adversarial Robustness

2 code implementations ICLR 2020 Tianyu Pang, Kun Xu, Yinpeng Dong, Chao Du, Ning Chen, Jun Zhu

Previous work shows that adversarially robust generalization requires larger sample complexity, and the same dataset, e.g., CIFAR-10, which enables good standard accuracy may not suffice to train robust models.

Adversarial Robustness

Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks

1 code implementation CVPR 2019 Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu

In this paper, we propose a translation-invariant attack method to generate more transferable adversarial examples against the defense models.

Translation
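
The core trick, as I understand it from the paper: instead of attacking many shifted copies of the image, convolve the input gradient with a smoothing kernel before the sign step. A hedged PyTorch sketch with an illustrative uniform kernel and a single FGSM-style step:

```python
import torch
import torch.nn.functional as F

def ti_fgsm_step(model, x, y, eps=8/255, kernel_size=7):
    """One FGSM-style step where the gradient is smoothed per channel (translation-invariant trick)."""
    x = x.detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    c = x.size(1)
    kernel = torch.ones(c, 1, kernel_size, kernel_size, device=x.device) / kernel_size ** 2
    grad = F.conv2d(grad, kernel, padding=kernel_size // 2, groups=c)  # depthwise smoothing of the gradient
    return torch.clamp(x + eps * grad.sign(), 0.0, 1.0).detach()
```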

Improving Adversarial Robustness via Promoting Ensemble Diversity

6 code implementations 25 Jan 2019 Tianyu Pang, Kun Xu, Chao Du, Ning Chen, Jun Zhu

Though deep neural networks have achieved significant progress on various tasks, often enhanced by model ensemble, existing high-performance models can be vulnerable to adversarial attacks.

Adversarial Robustness

Adversarial Attacks and Defences Competition

1 code implementation 31 Mar 2018 Alexey Kurakin, Ian Goodfellow, Samy Bengio, Yinpeng Dong, Fangzhou Liao, Ming Liang, Tianyu Pang, Jun Zhu, Xiaolin Hu, Cihang Xie, Jian-Yu Wang, Zhishuai Zhang, Zhou Ren, Alan Yuille, Sangxia Huang, Yao Zhao, Yuzhe Zhao, Zhonglin Han, Junjiajia Long, Yerkebulan Berdibekov, Takuya Akiba, Seiya Tokui, Motoki Abe

To accelerate research on adversarial examples and robustness of machine learning classifiers, Google Brain organized a NIPS 2017 competition that encouraged researchers to develop new methods to generate adversarial examples as well as to develop new ways to defend against them.

BIG-bench Machine Learning

Max-Mahalanobis Linear Discriminant Analysis Networks

2 code implementations ICML 2018 Tianyu Pang, Chao Du, Jun Zhu

In this paper, we show that a properly designed classifier can improve robustness to adversarial attacks and lead to better prediction results.

Boosting Adversarial Attacks with Momentum

6 code implementations CVPR 2018 Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, Jianguo Li

To further improve the success rates for black-box attacks, we apply momentum iterative algorithms to an ensemble of models, and show that the adversarially trained models with a strong defense ability are also vulnerable to our black-box attacks.

Adversarial Attack
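
The momentum iterative update at the core of this attack family, sketched for a single model in PyTorch (the paper additionally averages over an ensemble of models; hyperparameters are illustrative):

```python
import torch
import torch.nn.functional as F

def mi_fgsm(model, x, y, eps=16/255, steps=10, mu=1.0):
    """Momentum iterative FGSM: accumulate L1-normalized gradients, then take sign steps."""
    alpha = eps / steps
    x_adv, g = x.clone(), torch.zeros_like(x)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        g = mu * g + grad / (grad.abs().sum(dim=(1, 2, 3), keepdim=True) + 1e-12)  # L1-normalize, then accumulate
        x_adv = x_adv + alpha * g.sign()
        x_adv = torch.clamp(x + torch.clamp(x_adv - x, -eps, eps), 0.0, 1.0)
    return x_adv.detach()
```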

Towards Robust Detection of Adversarial Examples

1 code implementation NeurIPS 2018 Tianyu Pang, Chao Du, Yinpeng Dong, Jun Zhu

Although the recent progress is substantial, deep learning methods can be vulnerable to the maliciously generated adversarial examples.
