Search Results for author: Yisen Wang

Found 68 papers, 30 papers with code

Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation

1 code implementation CVPR 2022 Qingyan Meng, Mingqing Xiao, Shen Yan, Yisen Wang, Zhouchen Lin, Zhi-Quan Luo

In this paper, we propose the Differentiation on Spike Representation (DSR) method, which can achieve high performance competitive with ANNs while retaining low latency.

A Unified Contrastive Energy-based Model for Understanding the Generative Ability of Adversarial Training

no code implementations ICLR 2022 Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin

On the other hand, our unified framework can be extended to the unsupervised scenario, which interprets unsupervised contrastive learning as an important sampling of CEM.

Contrastive Learning

Chaos is a Ladder: A New Theoretical Understanding of Contrastive Learning via Augmentation Overlap

1 code implementation 25 Mar 2022 Yifei Wang, Qi Zhang, Yisen Wang, Jiansheng Yang, Zhouchen Lin

Our theory suggests an alternative understanding of contrastive learning: the role of aligning positive samples is more like a surrogate task than an ultimate goal, and the overlapped augmented views (i.e., the chaos) create a ladder for contrastive learning to gradually learn class-separated representations.

Contrastive Learning Model Selection +1

Self-Ensemble Adversarial Training for Improved Robustness

1 code implementation ICLR 2022 Hongjun Wang, Yisen Wang

In this work, we focus on the weight states of models over the course of training and devise a simple but powerful Self-Ensemble Adversarial Training (SEAT) method that yields a robust classifier by averaging the weights of historical models.
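
For intuition, a minimal PyTorch-style sketch of the weight-averaging idea behind such self-ensembling (an exponential moving average over training checkpoints; the decay value and the exact averaging scheme are assumptions, not the paper's precise procedure):

    import copy
    import torch

    def update_self_ensemble(ema_model, model, decay=0.999):
        # Running average over the weight states visited during training;
        # `decay` is a hypothetical smoothing factor.
        with torch.no_grad():
            for p_ema, p in zip(ema_model.parameters(), model.parameters()):
                p_ema.mul_(decay).add_(p, alpha=1.0 - decay)

    # Usage: initialize once with ema_model = copy.deepcopy(model), then
    # call update_self_ensemble(ema_model, model) after each
    # adversarial-training step and evaluate ema_model.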

On the Convergence and Robustness of Adversarial Training

no code implementations15 Dec 2021 Yisen Wang, Xingjun Ma, James Bailey, JinFeng Yi, BoWen Zhou, Quanquan Gu

In this paper, we propose such a criterion, namely First-Order Stationary Condition for constrained optimization (FOSC), to quantitatively evaluate the convergence quality of adversarial examples found in the inner maximization.
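
As a rough sketch, FOSC for an l_inf ball of radius eps around a natural input x_0 can be written c(x) = eps * ||grad f(x)||_1 - <x - x_0, grad f(x)>, where smaller values indicate a better-converged inner maximization; the code below is a hedged reconstruction with illustrative signatures:

    import torch

    def fosc(model, loss_fn, x_adv, x_nat, y, eps):
        # c(x) = eps * ||grad||_1 - <x - x_0, grad> for the l_inf ball;
        # c(x) >= 0, and smaller means the adversarial example is closer
        # to a first-order stationary point of the inner maximization.
        x_adv = x_adv.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0].flatten(1)
        diff = (x_adv.detach() - x_nat).flatten(1)
        return eps * grad.abs().sum(dim=1) - (diff * grad).sum(dim=1)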

Towards a Unified Game-Theoretic View of Adversarial Perturbations and Robustness

1 code implementation NeurIPS 2021 Jie Ren, Die Zhang, Yisen Wang, Lu Chen, Zhanpeng Zhou, Yiting Chen, Xu Cheng, Xin Wang, Meng Zhou, Jie Shi, Quanshi Zhang

This paper provides a unified view to explain different adversarial attacks and defense methods, i.e., the view of multi-order interactions between input variables of DNNs.

Adversarial Robustness

Gauge Equivariant Transformer

no code implementations NeurIPS 2021 Lingshen He, Yiming Dong, Yisen Wang, DaCheng Tao, Zhouchen Lin

Attention mechanism has shown great performance and efficiency in a lot of deep learning models, in which relative position encoding plays a crucial role.

Moiré Attack (MA): A New Potential Risk of Screen Photos

1 code implementation NeurIPS 2021 Dantong Niu, Ruohao Guo, Yisen Wang

Images captured by a camera play a critical role in training Deep Neural Networks (DNNs).

Efficient Equivariant Network

no code implementations NeurIPS 2021 Lingshen He, Yuxuan Chen, Zhengyang Shen, Yiming Dong, Yisen Wang, Zhouchen Lin

Group equivariant CNNs (G-CNNs) that incorporate more equivariance can significantly improve the performance of conventional CNNs.

Clustering Effect of (Linearized) Adversarial Robust Models

1 code implementation 25 Nov 2021 Yang Bai, Xin Yan, Yong Jiang, Shu-Tao Xia, Yisen Wang

Adversarial robustness has received increasing attention along with the study of adversarial examples.

Adversarial Robustness Domain Adaptation

Fooling Adversarial Training with Inducing Noise

no code implementations 19 Nov 2021 Zhirui Wang, Yifei Wang, Yisen Wang

Adversarial training is widely believed to be a reliable approach to improve model robustness against adversarial attack.

Adversarial Attack

Finding Optimal Tangent Points for Reducing Distortions of Hard-label Attacks

1 code implementation NeurIPS 2021 Chen Ma, Xiangyu Guo, Li Chen, Jun-Hai Yong, Yisen Wang

In this paper, we propose a novel geometric-based approach called Tangent Attack (TA), which identifies an optimal tangent point of a virtual hemisphere located on the decision boundary to reduce the distortion of the attack.

Hard-label Attack

A Unified Game-Theoretic Interpretation of Adversarial Robustness

1 code implementation 5 Nov 2021 Jie Ren, Die Zhang, Yisen Wang, Lu Chen, Zhanpeng Zhou, Yiting Chen, Xu Cheng, Xin Wang, Meng Zhou, Jie Shi, Quanshi Zhang

This paper provides a unified view to explain different adversarial attacks and defense methods, i.e., the view of multi-order interactions between input variables of DNNs.

Adversarial Robustness

Residual Relaxation for Multi-view Representation Learning

no code implementations NeurIPS 2021 Yifei Wang, Zhengyang Geng, Feng Jiang, Chuming Li, Yisen Wang, Jiansheng Yang, Zhouchen Lin

Multi-view methods learn representations by aligning multiple views of the same image and their performance largely depends on the choice of data augmentation.

Data Augmentation Representation Learning

Adversarial Neuron Pruning Purifies Backdoored Deep Models

2 code implementations NeurIPS 2021 Dongxian Wu, Yisen Wang

As deep neural networks (DNNs) are growing larger, their requirements for computational resources become huge, which makes outsourcing training more popular.

Moiré Attack (MA): A New Potential Risk of Screen Photos

1 code implementation 20 Oct 2021 Dantong Niu, Ruohao Guo, Yisen Wang

Images captured by a camera play a critical role in training Deep Neural Networks (DNNs).

Generalization in Deep RL for TSP Problems via Equivariance and Local Search

no code implementations 7 Oct 2021 Wenbin Ouyang, Yisen Wang, Paul Weng, Shaochen Han

Since training on large instances is impractical, we design a novel deep RL approach with a focus on generalizability.

reinforcement-learning

Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks

1 code implementation NeurIPS 2021 Hanxun Huang, Yisen Wang, Sarah Monazam Erfani, Quanquan Gu, James Bailey, Xingjun Ma

Specifically, we make the following key observations: 1) more parameters (higher model capacity) does not necessarily help adversarial robustness; 2) reducing capacity at the last stage (the last group of blocks) of the network can actually improve adversarial robustness; and 3) under the same parameter budget, there exists an optimal architectural configuration for adversarial robustness.

Adversarial Robustness
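
To make observation 2 above concrete, here is a toy stage builder in which the last-stage width is an explicit knob; the widths and depths are illustrative assumptions, not the paper's exact configurations:

    import torch.nn as nn

    def make_stages(widths, blocks_per_stage=5):
        # Builds a simple three-stage backbone; shrinking the last entry
        # of `widths` reduces capacity at the last stage only.
        stages, in_ch = [], 16
        for w in widths:
            layers = []
            for _ in range(blocks_per_stage):
                layers += [nn.Conv2d(in_ch, w, 3, padding=1), nn.ReLU()]
                in_ch = w
            stages.append(nn.Sequential(*layers))
        return nn.Sequential(*stages)

    baseline = make_stages((160, 320, 640))  # WRN-34-10-like widths
    reduced  = make_stages((160, 320, 512))  # smaller last stage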

Improving Generalization of Deep Reinforcement Learning-based TSP Solvers

no code implementations 6 Oct 2021 Wenbin Ouyang, Yisen Wang, Shaochen Han, Zhejian Jin, Paul Weng

In this work, we propose a novel approach named MAGIC that includes a deep learning architecture and a DRL training method.

reinforcement-learning

Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State

1 code implementation NeurIPS 2021 Mingqing Xiao, Qingyan Meng, Zongpeng Zhang, Yisen Wang, Zhouchen Lin

In this work, we consider feedback spiking neural networks, which are more brain-like, and propose a novel training method that does not rely on the exact reverse of the forward computation.

Chaos is a Ladder: A New Understanding of Contrastive Learning

no code implementations ICLR 2022 Yifei Wang, Qi Zhang, Yisen Wang, Jiansheng Yang, Zhouchen Lin

Our work suggests an alternative understanding of contrastive learning: the role of aligning positive samples is more like a surrogate task than an ultimate goal, and it is the overlapping augmented views (i.e., the chaos) that create a ladder for contrastive learning to gradually learn class-separated representations.

Contrastive Learning Self-Supervised Learning

Domain-wise Adversarial Training for Out-of-Distribution Generalization

no code implementations 29 Sep 2021 Shiji Xin, Yifei Wang, Jingtong Su, Yisen Wang

Extensive experiments show that our proposed DAT can effectively remove the domain-varying features and improve OOD generalization on both correlation shift and diversity shift tasks.

Out-of-Distribution Generalization

Understanding Graph Learning with Local Intrinsic Dimensionality

no code implementations 29 Sep 2021 Xiaojun Guo, Xingjun Ma, Yisen Wang

Many real-world problems can be formulated as graphs and solved by graph learning techniques.

Graph Learning

Optimization inspired Multi-Branch Equilibrium Models

no code implementations ICLR 2022 Mingjie Li, Yisen Wang, Xingyu Xie, Zhouchen Lin

Prior works have shown strong connections between some implicit models and optimization problems.

Fooling Adversarial Training with Induction Noise

no code implementations 29 Sep 2021 Zhirui Wang, Yifei Wang, Yisen Wang

Adversarial training is widely believed to be a reliable approach to improve model robustness against adversarial attack.

Adversarial Attack

SPIDE: A Purely Spike-based Method for Training Feedback Spiking Neural Networks

no code implementations 29 Sep 2021 Mingqing Xiao, Qingyan Meng, Zongpeng Zhang, Yisen Wang, Zhouchen Lin

Among them, the recently proposed implicit differentiation on the equilibrium state (IDE) method for training feedback SNNs is a promising direction that could generalize to local spike-based learning with flexible network structures.

Towards Understanding Catastrophic Overfitting in Fast Adversarial Training

no code implementations 29 Sep 2021 Renjie Chen, Yuan Luo, Yisen Wang

After adversarial training was proposed, a series of works focused on improving the computational efficiency of adversarial training for deep neural networks (DNNs).

Dissecting Local Properties of Adversarial Examples

no code implementations 29 Sep 2021 Lu Chen, Renjie Chen, Hang Guo, Yuan Luo, Quanshi Zhang, Yisen Wang

Adversarial examples have attracted significant attention over the years, yet a sufficient understanding of them is still lacking, especially when analyzing their behavior in combination with adversarial training.

Adversarial Robustness

Certified Adversarial Robustness Under the Bounded Support Set

no code implementations 29 Sep 2021 Yiwen Kou, Qinyuan Zheng, Yisen Wang

In this paper, we introduce a framework that is able to deal with the robustness properties of arbitrary smoothing measures, including those with a bounded support set, by using the Wasserstein distance as well as the total variation distance.

Adversarial Robustness

Reparameterized Sampling for Generative Adversarial Networks

1 code implementation 1 Jul 2021 Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin

Recently, sampling methods have been successfully applied to enhance the sample quality of Generative Adversarial Networks (GANs).

Demystifying Adversarial Training via A Unified Probabilistic Framework

no code implementations ICML Workshop AML 2021 Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin

Based on these, we propose principled adversarial sampling algorithms in both supervised and unsupervised scenarios.

Adversarial Interaction Attacks: Fooling AI to Misinterpret Human Intentions

no code implementations ICML Workshop AML 2021 Nodens Koren, Xingjun Ma, Qiuhong Ke, Yisen Wang, James Bailey

Understanding the actions of both humans and artificial intelligence (AI) agents is important before modern AI systems can be fully integrated into our daily life.

Adversarial Attack

Leveraged Weighted Loss for Partial Label Learning

1 code implementation 10 Jun 2021 Hongwei Wen, Jingyi Cui, Hanyuan Hang, Jiabin Liu, Yisen Wang, Zhouchen Lin

As an important branch of weakly supervised learning, partial label learning deals with data where each instance is assigned a set of candidate labels, of which only one is true.

Partial Label Learning

GBHT: Gradient Boosting Histogram Transform for Density Estimation

no code implementations 10 Jun 2021 Jingyi Cui, Hanyuan Hang, Yisen Wang, Zhouchen Lin

In this paper, we propose a density estimation algorithm called Gradient Boosting Histogram Transform (GBHT), where we adopt the Negative Log Likelihood as the loss function to make the boosting procedure available for unsupervised tasks.

Anomaly Detection Density Estimation +1

Can Subnetwork Structure be the Key to Out-of-Distribution Generalization?

no code implementations 5 Jun 2021 Dinghuai Zhang, Kartik Ahuja, Yilun Xu, Yisen Wang, Aaron Courville

Can models with particular structure avoid being biased towards spurious correlation in out-of-distribution (OOD) generalization?

Out-of-Distribution Generalization

Analysis and Applications of Class-wise Robustness in Adversarial Training

no code implementations 29 May 2021 Qi Tian, Kun Kuang, Kelu Jiang, Fei Wu, Yisen Wang

Adversarial training is one of the most effective approaches to improve model robustness against adversarial examples.

Optimization Induced Equilibrium Networks

no code implementations 27 May 2021 Xingyu Xie, Qiuhao Wang, Zenan Ling, Xia Li, Yisen Wang, Guangcan Liu, Zhouchen Lin

In this paper, we investigate an emerging question: can an implicit equilibrium model's equilibrium point be regarded as the solution of an optimization problem?

A Unified Game-Theoretic Interpretation of Adversarial Robustness

1 code implementation 12 Mar 2021 Jie Ren, Die Zhang, Yisen Wang, Lu Chen, Zhanpeng Zhou, Yiting Chen, Xu Cheng, Xin Wang, Meng Zhou, Jie Shi, Quanshi Zhang

This paper provides a unified view to explain different adversarial attacks and defense methods, i.e., the view of multi-order interactions between input variables of DNNs.

Adversarial Robustness

Improving Adversarial Robustness via Channel-wise Activation Suppressing

1 code implementation ICLR 2021 Yang Bai, Yuyuan Zeng, Yong Jiang, Shu-Tao Xia, Xingjun Ma, Yisen Wang

The study of adversarial examples and their activation has attracted significant attention for secure and robust learning with deep neural networks (DNNs).

Adversarial Robustness

What Do Deep Nets Learn? Class-wise Patterns Revealed in the Input Space

no code implementations 18 Jan 2021 Shihao Zhao, Xingjun Ma, Yisen Wang, James Bailey, Bo Li, Yu-Gang Jiang

In this paper, we focus on image classification and propose a method to visualize and understand the class-wise knowledge (patterns) learned by DNNs under three different settings including natural, backdoor and adversarial.

Image Classification

Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions

no code implementations 17 Jan 2021 Nodens Koren, Qiuhong Ke, Yisen Wang, James Bailey, Xingjun Ma

Understanding the actions of both humans and artificial intelligence (AI) agents is important before modern AI systems can be fully integrated into our daily life.

Adversarial Attack

Intriguing class-wise properties of adversarial training

no code implementations 1 Jan 2021 Qi Tian, Kun Kuang, Fei Wu, Yisen Wang

Adversarial training is one of the most effective approaches to improve model robustness against adversarial examples.

Adversarial Robustness

Towards A Unified Understanding and Improving of Adversarial Transferability

no code implementations ICLR 2021 Xin Wang, Jie Ren, Shuyun Lin, Xiangming Zhu, Yisen Wang, Quanshi Zhang

We discover and prove a negative correlation between adversarial transferability and the interaction inside adversarial perturbations.

Efficient Sampling for Generative Adversarial Networks with Coupling Markov Chains

no code implementations 1 Jan 2021 Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin

Recently, sampling methods have been successfully applied to enhance the sample quality of Generative Adversarial Networks (GANs).

A Unified Approach to Interpreting and Boosting Adversarial Transferability

1 code implementation 8 Oct 2020 Xin Wang, Jie Ren, Shuyun Lin, Xiangming Zhu, Yisen Wang, Quanshi Zhang

We discover and prove a negative correlation between adversarial transferability and the interaction inside adversarial perturbations.

Improving Query Efficiency of Black-box Adversarial Attack

1 code implementation ECCV 2020 Yang Bai, Yuyuan Zeng, Yong Jiang, Yisen Wang, Shu-Tao Xia, Weiwei Guo

Deep neural networks (DNNs) have demonstrated excellent performance on various tasks; however, they are at risk from adversarial examples that can be easily generated when the target model is accessible to an attacker (the white-box setting).

Adversarial Attack

Temporal Calibrated Regularization for Robust Noisy Label Learning

no code implementations 1 Jul 2020 Dongxian Wu, Yisen Wang, Zhuobin Zheng, Shu-Tao Xia

Deep neural networks (DNNs) exhibit great success on many tasks with the help of large-scale well annotated datasets.

Normalized Loss Functions for Deep Learning with Noisy Labels

4 code implementations ICML 2020 Xingjun Ma, Hanxun Huang, Yisen Wang, Simone Romano, Sarah Erfani, James Bailey

However, in practice, simply being robust is not sufficient for a loss function to train accurate DNNs.

Ranked #24 on Image Classification on mini WebVision 1.0 (ImageNet Top-1 Accuracy metric)

Learning with noisy labels
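
A minimal sketch of the normalization idea, dividing the loss on the labeled class by the loss summed over all classes (the paper further pairs such losses in an active-passive combination; details here are hedged):

    import torch
    import torch.nn.functional as F

    def normalized_cross_entropy(logits, target):
        # NCE(f(x), y) = CE(f(x), y) / sum_j CE(f(x), j); dividing by
        # the loss summed over all classes makes CE robust to noisy labels.
        log_p = F.log_softmax(logits, dim=1)
        ce_y = -log_p.gather(1, target.unsqueeze(1)).squeeze(1)
        ce_all = -log_p.sum(dim=1)
        return (ce_y / ce_all).mean()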

Improving Adversarial Robustness Requires Revisiting Misclassified Examples

no code implementations ICLR 2020 Yisen Wang, Difan Zou, Jin-Feng Yi, James Bailey, Xingjun Ma, Quanquan Gu

In this paper, we investigate the distinctive influence of misclassified and correctly classified examples on the final robustness of adversarial training.

Adversarial Robustness

Adversarial Weight Perturbation Helps Robust Generalization

3 code implementations NeurIPS 2020 Dongxian Wu, Shu-Tao Xia, Yisen Wang

The study of improving the robustness of deep neural networks against adversarial examples has grown rapidly in recent years.

Adversarial Robustness
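
A simplified sketch of the weight-perturbation step (actual AWP computes a layer-wise relative perturbation on adversarial examples; the scaling below is an assumption, not the paper's exact rule):

    import torch

    def awp_perturb(model, loss, gamma=0.01):
        # Nudge each weight along its loss gradient, scaled relative to
        # the weight norm; returns the deltas so they can be subtracted
        # back out after the subsequent training step.
        params = [p for p in model.parameters() if p.requires_grad]
        grads = torch.autograd.grad(loss, params)
        deltas = []
        with torch.no_grad():
            for p, g in zip(params, grads):
                delta = gamma * p.norm() * g / (g.norm() + 1e-12)
                p.add_(delta)
                deltas.append(delta)
        return deltas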

Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets

2 code implementations ICLR 2020 Dongxian Wu, Yisen Wang, Shu-Tao Xia, James Bailey, Xingjun Ma

We find that using more gradients from the skip connections rather than the residual modules according to a decay factor, allows one to craft adversarial examples with high transferability.
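
In spirit, this Skip Gradient Method can be sketched with backward hooks that scale the gradient through the residual (non-skip) branches by a decay factor gamma < 1; which modules to hook depends on the architecture, so the module filter below is a heuristic assumption:

    import torch

    def sgm_hook(gamma):
        # Scales gradients flowing through a residual branch so that
        # relatively more gradient comes from the skip connections.
        def hook(module, grad_input, grad_output):
            return tuple(g * gamma if g is not None else None
                         for g in grad_input)
        return hook

    # Usage sketch on a torchvision-style ResNet (filter is heuristic):
    # for name, m in model.named_modules():
    #     if isinstance(m, torch.nn.ReLU):
    #         m.register_full_backward_hook(sgm_hook(0.5))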

Dirichlet Latent Variable Hierarchical Recurrent Encoder-Decoder in Dialogue Generation

no code implementations IJCNLP 2019 Min Zeng, Yisen Wang, Yuan Luo

Building on this, we further find that there is redundancy among the dimensions of the latent variable, and that the lengths and sentence patterns of the responses are strongly correlated with individual dimensions of the latent variable.

Dialogue Generation

Symmetric Cross Entropy for Robust Learning with Noisy Labels

4 code implementations ICCV 2019 Yisen Wang, Xingjun Ma, Zaiyi Chen, Yuan Luo, Jin-Feng Yi, James Bailey

In this paper, we show that DNN learning with Cross Entropy (CE) exhibits overfitting to noisy labels on some classes ("easy" classes), but more surprisingly, it also suffers from significant under-learning on some other classes ("hard" classes).

Learning with noisy labels
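
A short sketch of the symmetric formulation, SCE = alpha * CE + beta * RCE, where the Reverse Cross Entropy (RCE) swaps the roles of prediction and label and clamps log(0) for stability; the alpha/beta values are assumed settings, not prescribed here:

    import torch
    import torch.nn.functional as F

    def symmetric_cross_entropy(logits, target, alpha=0.1, beta=1.0):
        # RCE = -sum_k p(k|x) * log q(k|x), with the one-hot label q
        # clamped so that log(0) stays finite.
        ce = F.cross_entropy(logits, target)
        pred = F.softmax(logits, dim=1)
        one_hot = F.one_hot(target, logits.size(1)).float().clamp(min=1e-4)
        rce = -(pred * one_hot.log()).sum(dim=1).mean()
        return alpha * ce + beta * rce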

Joint Semantic Domain Alignment and Target Classifier Learning for Unsupervised Domain Adaptation

no code implementations 10 Jun 2019 Dong-Dong Chen, Yisen Wang, Jin-Feng Yi, Zaiyi Chen, Zhi-Hua Zhou

Unsupervised domain adaptation aims to transfer the classifier learned from the source domain to the target domain in an unsupervised manner.

Unsupervised Domain Adaptation

Learning Deep Hidden Nonlinear Dynamics from Aggregate Data

no code implementations 22 Jul 2018 Yisen Wang, Bo Dai, Lingkai Kong, Sarah Monazam Erfani, James Bailey, Hongyuan Zha

Learning nonlinear dynamics from diffusion data is a challenging problem since the individuals observed may be different at different time points, generally following an aggregate behaviour.

Decoupled Networks

1 code implementation CVPR 2018 Weiyang Liu, Zhen Liu, Zhiding Yu, Bo Dai, Rongmei Lin, Yisen Wang, James M. Rehg, Le Song

Inner product-based convolution has been a central component of convolutional neural networks (CNNs) and the key to learning visual representations.

Iterative Learning with Open-set Noisy Labels

1 code implementation CVPR 2018 Yisen Wang, Weiyang Liu, Xingjun Ma, James Bailey, Hongyuan Zha, Le Song, Shu-Tao Xia

We refer to this more complex scenario as the open-set noisy label problem and show that making accurate predictions in this setting is nontrivial.

Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality

1 code implementation ICLR 2018 Xingjun Ma, Bo Li, Yisen Wang, Sarah M. Erfani, Sudanthi Wijewickrema, Grant Schoenebeck, Dawn Song, Michael E. Houle, James Bailey

Deep Neural Networks (DNNs) have recently been shown to be vulnerable against adversarial examples, which are carefully crafted instances that can mislead DNNs to make errors during prediction.

Adversarial Defense

Residual Convolutional CTC Networks for Automatic Speech Recognition

no code implementations 24 Feb 2017 Yisen Wang, Xuejiao Deng, Songbai Pu, Zhiheng Huang

Furthermore, we introduce a CTC-based system combination, which is different from the conventional frame-wise senone-based one.

Automatic Speech Recognition

Unifying Decision Trees Split Criteria Using Tsallis Entropy

no code implementations 25 Nov 2015 Yisen Wang, Chaobing Song, Shu-Tao Xia

In this paper, a Tsallis Entropy Criterion (TEC) algorithm is proposed to unify Shannon entropy, Gain Ratio and Gini index, which generalizes the split criteria of decision trees.
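
For reference, the Tsallis entropy underlying this unification, with its standard limiting cases (Gain Ratio additionally normalizes the Shannon-entropy gain by the split information):

    S_q(p) = \frac{1}{q-1}\left(1 - \sum_i p_i^q\right)

    \lim_{q \to 1} S_q(p) = -\sum_i p_i \log p_i \quad \text{(Shannon entropy)}

    S_2(p) = 1 - \sum_i p_i^2 \quad \text{(Gini index)}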
