Search Results for author: Yisen Wang

Found 94 papers, 50 papers with code

FMM-Attack: A Flow-based Multi-modal Adversarial Attack on Video-based LLMs

no code implementations 20 Mar 2024 Jinmin Li, Kuofeng Gao, Yang Bai, Jingyun Zhang, Shu-Tao Xia, Yisen Wang

Despite the remarkable performance of video-based large language models (LLMs), their adversarial threat remains unexplored.

Adversarial Attack

Non-negative Contrastive Learning

1 code implementation 19 Mar 2024 Yifei Wang, Qi Zhang, Yaoyu Guo, Yisen Wang

In this paper, we propose Non-negative Contrastive Learning (NCL), a renaissance of Non-negative Matrix Factorization (NMF) aimed at deriving interpretable features.

Contrastive Learning Disentanglement +1
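The non-negativity idea above lends itself to a compact illustration. Below is a minimal, hedged sketch that assumes NCL simply constrains encoder outputs to be non-negative (here via a ReLU) before a standard InfoNCE loss; the paper's exact formulation may differ, and `nce_loss`/`ncl_loss` are hypothetical names.

```python
import torch
import torch.nn.functional as F

def nce_loss(z1, z2, temperature=0.5):
    # standard InfoNCE between two batches of paired views
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                # (B, B) similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def ncl_loss(h1, h2, temperature=0.5):
    # assumed NCL-style variant: clip features to be non-negative,
    # NMF-style, so each dimension can act as an interpretable "part"
    return nce_loss(F.relu(h1), F.relu(h2), temperature)
```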

Do Generated Data Always Help Contrastive Learning?

1 code implementation 19 Mar 2024 Yifei Wang, Jizhe Zhang, Yisen Wang

Contrastive Learning (CL) has emerged as one of the most successful paradigms for unsupervised visual representation learning, yet it often depends on intensive manual data augmentations.

Contrastive Learning Data Augmentation +2

Studious Bob Fight Back Against Jailbreaking via Prompt Adversarial Tuning

no code implementations 9 Feb 2024 Yichuan Mo, Yuji Wang, Zeming Wei, Yisen Wang

To our knowledge, we are the first to implement a defense from the perspective of prompt tuning.

Adversarial Examples Are Not Real Features

1 code implementation NeurIPS 2023 Ang Li, Yifei Wang, Yiwen Guo, Yisen Wang

A well-known theory by Ilyas et al. (2019) explains adversarial vulnerability from a data perspective by showing that one can extract non-robust features from adversarial examples and that these features alone are useful for classification.

Contrastive Learning Self-Supervised Learning

Laplacian Canonization: A Minimalist Approach to Sign and Basis Invariant Spectral Embedding

3 code implementations NeurIPS 2023 Jiangyan Ma, Yifei Wang, Yisen Wang

However, from a theoretical perspective, the universal expressive power of spectral embedding comes at the price of losing two important invariance properties of graphs, sign and basis invariance, which also limits its effectiveness on graph data.

Graph Classification Graph Embedding +1

Jailbreak and Guard Aligned Language Models with Only Few In-Context Demonstrations

no code implementations 10 Oct 2023 Zeming Wei, Yifei Wang, Yisen Wang

Large Language Models (LLMs) have shown remarkable success in various tasks, but concerns about their safety and the potential for generating malicious content have emerged.

In-Context Learning Language Modelling

Robust Long-Tailed Learning via Label-Aware Bounded CVaR

no code implementations 29 Aug 2023 Hong Zhu, Runpeng Yu, Xing Tang, Yifei Wang, Yuan Fang, Yisen Wang

Data in real-world classification problems are always imbalanced or long-tailed, wherein the majority classes contain most of the samples and dominate model training.

On the Generalization of Multi-modal Contrastive Learning

1 code implementation 7 Jun 2023 Qi Zhang, Yifei Wang, Yisen Wang

Multi-modal contrastive learning (MMCL) has recently garnered considerable interest due to its superior performance in visual tasks, achieved by embedding multi-modal data, such as visual-language pairs.

Contrastive Learning

Rethinking Weak Supervision in Helping Contrastive Learning

no code implementations 7 Jun 2023 Jingyi Cui, Weiran Huang, Yifei Wang, Yisen Wang

Therefore, to explore the mechanical differences between semi-supervised and noisy-labeled information in helping contrastive learning, we establish a unified theoretical framework of contrastive learning under weak supervision.

Contrastive Learning Denoising +1

CFA: Class-wise Calibrated Fair Adversarial Training

1 code implementation CVPR 2023 Zeming Wei, Yifei Wang, Yiwen Guo, Yisen Wang

Adversarial training has been widely acknowledged as the most effective method for improving the robustness of Deep Neural Networks (DNNs) against adversarial examples.

Adversarial Robustness Fairness

Generalist: Decoupling Natural and Robust Generalization

1 code implementation CVPR 2023 Hongjun Wang, Yisen Wang

The parameters of base learners are collected and combined to form a global learner at intervals during the training process.
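As a rough illustration of that collect-and-combine step, here is a hedged sketch in which base-learner parameters are averaged into the global learner every `interval` steps; Generalist's actual mixing strategy may be more involved, and the uniform weighting is an assumption.

```python
import torch

@torch.no_grad()
def combine_into_global(global_model, base_models, step, interval=100):
    # every `interval` steps, overwrite the global learner with the
    # parameter-wise mean of the base learners
    if step % interval != 0:
        return
    base_params = [dict(m.named_parameters()) for m in base_models]
    for name, g_param in global_model.named_parameters():
        g_param.copy_(torch.stack([p[name] for p in base_params]).mean(dim=0))
```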

ContraNorm: A Contrastive Learning Perspective on Oversmoothing and Beyond

2 code implementations 12 Mar 2023 Xiaojun Guo, Yifei Wang, Tianqi Du, Yisen Wang

Instead of characterizing oversmoothing from the view of complete collapse in which representations converge to a single point, we dive into a more general perspective of dimensional collapse in which representations lie in a narrow cone.

Contrastive Learning
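The dimensional-collapse view can be made concrete with a small diagnostic: the entropy-based effective rank of a batch of representations, which tends toward 1 as they collapse into a narrow cone. This is a generic diagnostic sketch, not code from the ContraNorm paper.

```python
import torch

def effective_rank(reps, eps=1e-12):
    # reps: (N, d) batch of representations
    reps = reps - reps.mean(dim=0, keepdim=True)   # center the features
    s = torch.linalg.svdvals(reps)                 # singular-value spectrum
    p = s / (s.sum() + eps)                        # normalize to a distribution
    entropy = -(p * (p + eps).log()).sum()
    return entropy.exp().item()                    # small value = collapse
```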

A Message Passing Perspective on Learning Dynamics of Contrastive Learning

1 code implementation 8 Mar 2023 Yifei Wang, Qi Zhang, Tianqi Du, Jiansheng Yang, Zhouchen Lin, Yisen Wang

In recent years, contrastive learning has achieved impressive results in self-supervised visual representation learning, but a rigorous understanding of its learning dynamics is still lacking.

Contrastive Learning Graph Attention +1

Towards a Unified Theoretical Understanding of Non-contrastive Learning via Rank Differential Mechanism

1 code implementation 4 Mar 2023 Zhijian Zhuo, Yifei Wang, Jinwen Ma, Yisen Wang

In this work, we propose a unified theoretical understanding for existing variants of non-contrastive learning.

Contrastive Learning

ArCL: Enhancing Contrastive Learning with Augmentation-Robust Representations

no code implementations 2 Mar 2023 Xuyang Zhao, Tianqi Du, Yisen Wang, Jun Yao, Weiran Huang

Moreover, we show that contrastive learning fails to learn domain-invariant features, which limits its transferability.

Contrastive Learning Data Augmentation +1

Rethinking the Effect of Data Augmentation in Adversarial Contrastive Learning

1 code implementation 2 Mar 2023 Rundong Luo, Yifei Wang, Yisen Wang

Motivated by this observation, we revisit existing self-AT methods and discover an inherent dilemma that affects self-AT robustness: either strong or weak data augmentations are harmful to self-AT, and a medium strength is insufficient to bridge the gap.

Contrastive Learning Data Augmentation +1

Towards Memory- and Time-Efficient Backpropagation for Training Spiking Neural Networks

1 code implementation ICCV 2023 Qingyan Meng, Mingqing Xiao, Shen Yan, Yisen Wang, Zhouchen Lin, Zhi-Quan Luo

In particular, our method achieves state-of-the-art accuracy on ImageNet, while the memory cost and training time are reduced by more than 70% and 50%, respectively, compared with BPTT.

SPIDE: A Purely Spike-based Method for Training Feedback Spiking Neural Networks

1 code implementation 1 Feb 2023 Mingqing Xiao, Qingyan Meng, Zongpeng Zhang, Yisen Wang, Zhouchen Lin

In this paper, we study spike-based implicit differentiation on the equilibrium state (SPIDE), which extends the recently proposed implicit differentiation on the equilibrium state (IDE) training method to supervised learning with purely spike-based computation, demonstrating the potential for energy-efficient training of SNNs.

On the Connection between Invariant Learning and Adversarial Training for Out-of-Distribution Generalization

no code implementations 18 Dec 2022 Shiji Xin, Yifei Wang, Jingtong Su, Yisen Wang

Extensive experiments show that our proposed DAT can effectively remove domain-varying features and improve OOD generalization under both correlation shift and diversity shift.

Out-of-Distribution Generalization

How Mask Matters: Towards Theoretical Understandings of Masked Autoencoders

2 code implementations 15 Oct 2022 Qi Zhang, Yifei Wang, Yisen Wang

Masked Autoencoders (MAE) based on a reconstruction task have risen to be a promising paradigm for self-supervised learning (SSL) and achieve state-of-the-art performance across different benchmark datasets.

Contrastive Learning Self-Supervised Learning

When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture

1 code implementation 14 Oct 2022 Yichuan Mo, Dongxian Wu, Yifei Wang, Yiwen Guo, Yisen Wang

We find that, when randomly masking gradients from some attention blocks or masking perturbations on some patches during adversarial training, the adversarial robustness of ViTs can be remarkably improved, which may open up a line of work exploring the architectural information inside newly designed models like ViTs.

Adversarial Robustness
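The perturbation-masking part of the recipe can be sketched directly: zero the adversarial perturbation on a random subset of patches before each update. The patch size and keep probability below are illustrative assumptions, not the paper's settings.

```python
import torch

def mask_patch_perturbation(delta, patch=16, keep_prob=0.9):
    # delta: (B, C, H, W) adversarial perturbation, H and W divisible by patch
    b, _, h, w = delta.shape
    keep = (torch.rand(b, 1, h // patch, w // patch, device=delta.device)
            < keep_prob).float()
    keep = keep.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    return delta * keep                            # masked patches get zero noise
```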

Proving Common Mechanisms Shared by Twelve Methods of Boosting Adversarial Transferability

no code implementations 24 Jul 2022 Quanshi Zhang, Xin Wang, Jie Ren, Xu Cheng, Shuyun Lin, Yisen Wang, Xiangming Zhu

This paper summarizes the common mechanism shared by twelve previous transferability-boosting methods in a unified view, i.e., these methods all reduce game-theoretic interactions between regional adversarial perturbations.

GeoSegNet: Point Cloud Semantic Segmentation via Geometric Encoder-Decoder Modeling

1 code implementation 14 Jul 2022 Chen Chen, Yisen Wang, Honghua Chen, Xuefeng Yan, Dayong Ren, Yanwen Guo, Haoran Xie, Fu Lee Wang, Mingqiang Wei

Semantic segmentation of point clouds, which aims to assign each point a semantic category, is critical to 3D scene understanding. Despite significant advances in recent years, most existing methods still suffer from either object-level misclassification or boundary-level ambiguity.

Object Segmentation +1

Optimization-Induced Graph Implicit Nonlinear Diffusion

1 code implementation 29 Jun 2022 Qi Chen, Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin

Moreover, we show that the optimization-induced variants of our models can boost the performance and improve training stability and efficiency as well.

Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation

1 code implementation CVPR 2022 Qingyan Meng, Mingqing Xiao, Shen Yan, Yisen Wang, Zhouchen Lin, Zhi-Quan Luo

In this paper, we propose the Differentiation on Spike Representation (DSR) method, which achieves high performance competitive with ANNs yet with low latency.

A Unified Contrastive Energy-based Model for Understanding the Generative Ability of Adversarial Training

no code implementations ICLR 2022 Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin

On the other hand, our unified framework can be extended to the unsupervised scenario, which interprets unsupervised contrastive learning as importance sampling of CEM.

Contrastive Learning

Chaos is a Ladder: A New Theoretical Understanding of Contrastive Learning via Augmentation Overlap

1 code implementation 25 Mar 2022 Yifei Wang, Qi Zhang, Yisen Wang, Jiansheng Yang, Zhouchen Lin

Our theory suggests an alternative understanding of contrastive learning: the role of aligning positive samples is more like a surrogate task than an ultimate goal, and the overlapped augmented views (i.e., the chaos) create a ladder for contrastive learning to gradually learn class-separated representations.

Contrastive Learning Model Selection +1

Self-Ensemble Adversarial Training for Improved Robustness

1 code implementation ICLR 2022 Hongjun Wang, Yisen Wang

In this work, we focus on the weight states of models throughout the training process and devise a simple but powerful Self-Ensemble Adversarial Training (SEAT) method that yields a robust classifier by averaging the weights of history models.
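Averaging the weights of history models is naturally expressed as a running average maintained alongside training. The sketch below assumes an exponential moving average; SEAT's exact averaging schedule may differ.

```python
import copy
import torch

class SelfEnsemble:
    def __init__(self, model, decay=0.999):
        # frozen copy that accumulates the average of historical weights
        self.avg = copy.deepcopy(model).eval()
        self.decay = decay

    @torch.no_grad()
    def update(self, model):
        # EMA step: avg <- decay * avg + (1 - decay) * current weights
        for a, p in zip(self.avg.parameters(), model.parameters()):
            a.mul_(self.decay).add_(p, alpha=1 - self.decay)
```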

On the Convergence and Robustness of Adversarial Training

no code implementations 15 Dec 2021 Yisen Wang, Xingjun Ma, James Bailey, JinFeng Yi, BoWen Zhou, Quanquan Gu

In this paper, we propose such a criterion, namely First-Order Stationary Condition for constrained optimization (FOSC), to quantitatively evaluate the convergence quality of adversarial examples found in the inner maximization.
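As I recall the paper's criterion, for an L-infinity ball of radius eps around a clean input x0, FOSC evaluates c(x) = eps * ||g||_1 - <x - x0, g>, where g is the loss gradient at the adversarial point x, and c(x) = 0 means the inner maximization has reached a first-order stationary point. A hedged sketch under that assumption:

```python
import torch

def fosc(x_adv, x_clean, grad, eps):
    # grad: gradient of the loss w.r.t. x_adv; tensors shaped (B, ...)
    g = grad.flatten(1)
    d = (x_adv - x_clean).flatten(1)
    # eps * ||g||_1 - <x_adv - x_clean, g>; smaller means better converged
    return eps * g.abs().sum(dim=1) - (d * g).sum(dim=1)
```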

Moiré Attack (MA): A New Potential Risk of Screen Photos

1 code implementation NeurIPS 2021 Dantong Niu, Ruohao Guo, Yisen Wang

Images, captured by a camera, play a critical role in training Deep Neural Networks (DNNs).

Gauge Equivariant Transformer

no code implementations NeurIPS 2021 Lingshen He, Yiming Dong, Yisen Wang, DaCheng Tao, Zhouchen Lin

The attention mechanism has shown great performance and efficiency in many deep learning models, in which relative position encoding plays a crucial role.

Position

Towards a Unified Game-Theoretic View of Adversarial Perturbations and Robustness

1 code implementation NeurIPS 2021 Jie Ren, Die Zhang, Yisen Wang, Lu Chen, Zhanpeng Zhou, Yiting Chen, Xu Cheng, Xin Wang, Meng Zhou, Jie Shi, Quanshi Zhang

This paper provides a unified view to explain different adversarial attacks and defense methods, i.e., the view of multi-order interactions between input variables of DNNs.

Adversarial Robustness

Efficient Equivariant Network

1 code implementation NeurIPS 2021 Lingshen He, Yuxuan Chen, Zhengyang Shen, Yiming Dong, Yisen Wang, Zhouchen Lin

Group equivariant CNNs (G-CNNs) that incorporate more equivariance can significantly improve the performance of conventional CNNs.

Clustering Effect of (Linearized) Adversarial Robust Models

1 code implementation 25 Nov 2021 Yang Bai, Xin Yan, Yong Jiang, Shu-Tao Xia, Yisen Wang

Adversarial robustness has received increasing attention along with the study of adversarial examples.

Adversarial Robustness Clustering +1

Fooling Adversarial Training with Inducing Noise

no code implementations 19 Nov 2021 Zhirui Wang, Yifei Wang, Yisen Wang

Adversarial training is widely believed to be a reliable approach to improve model robustness against adversarial attack.

Adversarial Attack

Finding Optimal Tangent Points for Reducing Distortions of Hard-label Attacks

1 code implementation NeurIPS 2021 Chen Ma, Xiangyu Guo, Li Chen, Jun-Hai Yong, Yisen Wang

In this paper, we propose a novel geometric-based approach called Tangent Attack (TA), which identifies an optimal tangent point of a virtual hemisphere located on the decision boundary to reduce the distortion of the attack.

Hard-label Attack

A Unified Game-Theoretic Interpretation of Adversarial Robustness

1 code implementation 5 Nov 2021 Jie Ren, Die Zhang, Yisen Wang, Lu Chen, Zhanpeng Zhou, Yiting Chen, Xu Cheng, Xin Wang, Meng Zhou, Jie Shi, Quanshi Zhang

This paper provides a unified view to explain different adversarial attacks and defense methods, i.e., the view of multi-order interactions between input variables of DNNs.

Adversarial Robustness

Residual Relaxation for Multi-view Representation Learning

no code implementations NeurIPS 2021 Yifei Wang, Zhengyang Geng, Feng Jiang, Chuming Li, Yisen Wang, Jiansheng Yang, Zhouchen Lin

Multi-view methods learn representations by aligning multiple views of the same image and their performance largely depends on the choice of data augmentation.

Data Augmentation Representation Learning

Adversarial Neuron Pruning Purifies Backdoored Deep Models

2 code implementations NeurIPS 2021 Dongxian Wu, Yisen Wang

As deep neural networks (DNNs) grow larger, their requirements for computational resources become huge, which makes outsourcing training more popular.

Moiré Attack (MA): A New Potential Risk of Screen Photos

1 code implementation 20 Oct 2021 Dantong Niu, Ruohao Guo, Yisen Wang

Images, captured by a camera, play a critical role in training Deep Neural Networks (DNNs).

Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks

1 code implementation NeurIPS 2021 Hanxun Huang, Yisen Wang, Sarah Monazam Erfani, Quanquan Gu, James Bailey, Xingjun Ma

Specifically, we make the following key observations: 1) more parameters (higher model capacity) does not necessarily help adversarial robustness; 2) reducing capacity at the last stage (the last group of blocks) of the network can actually improve adversarial robustness; and 3) under the same parameter budget, there exists an optimal architectural configuration for adversarial robustness.

Adversarial Robustness

Generalization in Deep RL for TSP Problems via Equivariance and Local Search

no code implementations 7 Oct 2021 Wenbin Ouyang, Yisen Wang, Paul Weng, Shaochen Han

Since training on large instances is impractical, we design a novel deep RL approach with a focus on generalizability.

reinforcement-learning Reinforcement Learning (RL)

Improving Generalization of Deep Reinforcement Learning-based TSP Solvers

no code implementations 6 Oct 2021 Wenbin Ouyang, Yisen Wang, Shaochen Han, Zhejian Jin, Paul Weng

In this work, we propose a novel approach named MAGIC that includes a deep learning architecture and a DRL training method.

reinforcement-learning Reinforcement Learning (RL)

Dissecting Local Properties of Adversarial Examples

no code implementations 29 Sep 2021 Lu Chen, Renjie Chen, Hang Guo, Yuan Luo, Quanshi Zhang, Yisen Wang

Adversarial examples have attracted significant attention over the years, yet a sufficient understanding of them is still lacking, especially when analyzing their performance in combination with adversarial training.

Adversarial Robustness

Optimization inspired Multi-Branch Equilibrium Models

no code implementations ICLR 2022 Mingjie Li, Yisen Wang, Xingyu Xie, Zhouchen Lin

Prior works have shown strong connections between some implicit models and optimization problems.

Domain-wise Adversarial Training for Out-of-Distribution Generalization

no code implementations 29 Sep 2021 Shiji Xin, Yifei Wang, Jingtong Su, Yisen Wang

Extensive experiments show that our proposed DAT can effectively remove the domain-varying features and improve OOD generalization on both correlation shift and diversity shift tasks.

Out-of-Distribution Generalization

Understanding Graph Learning with Local Intrinsic Dimensionality

no code implementations 29 Sep 2021 Xiaojun Guo, Xingjun Ma, Yisen Wang

Many real-world problems can be formulated as graphs and solved by graph learning techniques.

Graph Learning

Certified Adversarial Robustness Under the Bounded Support Set

no code implementations 29 Sep 2021 Yiwen Kou, Qinyuan Zheng, Yisen Wang

In this paper, we introduce a framework that can handle the robustness properties of arbitrary smoothing measures, including those with a bounded support set, by using the Wasserstein distance as well as the total variation distance.

Adversarial Robustness

Towards Understanding Catastrophic Overfitting in Fast Adversarial Training

no code implementations 29 Sep 2021 Renjie Chen, Yuan Luo, Yisen Wang

After adversarial training was proposed, a series of works has focused on improving the computational efficiency of adversarial training for deep neural networks (DNNs).

Chaos is a Ladder: A New Understanding of Contrastive Learning

no code implementations ICLR 2022 Yifei Wang, Qi Zhang, Yisen Wang, Jiansheng Yang, Zhouchen Lin

Our work suggests an alternative understanding of contrastive learning: the role of aligning positive samples is more like a surrogate task than an ultimate goal, and it is the overlapping augmented views (i.e., the chaos) that create a ladder for contrastive learning to gradually learn class-separated representations.

Contrastive Learning Self-Supervised Learning

Fooling Adversarial Training with Induction Noise

no code implementations 29 Sep 2021 Zhirui Wang, Yifei Wang, Yisen Wang

Adversarial training is widely believed to be a reliable approach to improve model robustness against adversarial attack.

Adversarial Attack

Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State

1 code implementation NeurIPS 2021 Mingqing Xiao, Qingyan Meng, Zongpeng Zhang, Yisen Wang, Zhouchen Lin

In this work, we consider feedback spiking neural networks, which are more brain-like, and propose a novel training method that does not rely on the exact reverse of the forward computation.
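Stripped of the spiking details, implicit differentiation on an equilibrium z* = f(z*, x) replaces backprop-through-time with a linear fixed point in the backward pass: the adjoint g solves g = J^T g + v, where J is the Jacobian of f at z* and v is the incoming gradient. A generic hedged sketch using vector-Jacobian products (not the paper's SNN-specific method):

```python
import torch

def equilibrium_backward(f, z_star, x, v, n_iter=30):
    # returns (I - J^T)^{-1} v via the Neumann-series fixed point
    # g <- J^T g + v, one vector-Jacobian product per iteration
    z_star = z_star.detach().requires_grad_(True)
    fz = f(z_star, x)
    g = v.clone()
    for _ in range(n_iter):
        (jtg,) = torch.autograd.grad(fz, z_star, grad_outputs=g,
                                     retain_graph=True)
        g = jtg + v
    return g
```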

Reparameterized Sampling for Generative Adversarial Networks

1 code implementation 1 Jul 2021 Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin

Recently, sampling methods have been successfully applied to enhance the sample quality of Generative Adversarial Networks (GANs).

Adversarial Interaction Attacks: Fooling AI to Misinterpret Human Intentions

no code implementations ICML Workshop AML 2021 Nodens Koren, Xingjun Ma, Qiuhong Ke, Yisen Wang, James Bailey

Understanding the actions of both humans and artificial intelligence (AI) agents is important before modern AI systems can be fully integrated into our daily life.

Adversarial Attack

Demystifying Adversarial Training via A Unified Probabilistic Framework

no code implementations ICML Workshop AML 2021 Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin

Based on these, we propose principled adversarial sampling algorithms in both supervised and unsupervised scenarios.

Leveraged Weighted Loss for Partial Label Learning

1 code implementation 10 Jun 2021 Hongwei Wen, Jingyi Cui, Hanyuan Hang, Jiabin Liu, Yisen Wang, Zhouchen Lin

As an important branch of weakly supervised learning, partial label learning deals with data where each instance is assigned a set of candidate labels, only one of which is true.

Partial Label Learning Weakly-supervised Learning

GBHT: Gradient Boosting Histogram Transform for Density Estimation

no code implementations 10 Jun 2021 Jingyi Cui, Hanyuan Hang, Yisen Wang, Zhouchen Lin

In this paper, we propose a density estimation algorithm called Gradient Boosting Histogram Transform (GBHT), where we adopt the Negative Log Likelihood as the loss function to make the boosting procedure available for unsupervised tasks.

Anomaly Detection Density Estimation +1

Can Subnetwork Structure be the Key to Out-of-Distribution Generalization?

no code implementations 5 Jun 2021 Dinghuai Zhang, Kartik Ahuja, Yilun Xu, Yisen Wang, Aaron Courville

Can models with particular structure avoid being biased towards spurious correlation in out-of-distribution (OOD) generalization?

Out-of-Distribution Generalization

Analysis and Applications of Class-wise Robustness in Adversarial Training

no code implementations 29 May 2021 Qi Tian, Kun Kuang, Kelu Jiang, Fei Wu, Yisen Wang

Adversarial training is one of the most effective approaches to improve model robustness against adversarial examples.

Optimization Induced Equilibrium Networks

no code implementations 27 May 2021 Xingyu Xie, Qiuhao Wang, Zenan Ling, Xia Li, Yisen Wang, Guangcan Liu, Zhouchen Lin

In this paper, we investigate an emerging question: can an implicit equilibrium model's equilibrium point be regarded as the solution of an optimization problem?

A Unified Game-Theoretic Interpretation of Adversarial Robustness

1 code implementation 12 Mar 2021 Jie Ren, Die Zhang, Yisen Wang, Lu Chen, Zhanpeng Zhou, Yiting Chen, Xu Cheng, Xin Wang, Meng Zhou, Jie Shi, Quanshi Zhang

This paper provides a unified view to explain different adversarial attacks and defense methods, i.e., the view of multi-order interactions between input variables of DNNs.

Adversarial Robustness

Improving Adversarial Robustness via Channel-wise Activation Suppressing

1 code implementation ICLR 2021 Yang Bai, Yuyuan Zeng, Yong Jiang, Shu-Tao Xia, Xingjun Ma, Yisen Wang

The study of adversarial examples and their activation has attracted significant attention for secure and robust learning with deep neural networks (DNNs).

Adversarial Robustness

What Do Deep Nets Learn? Class-wise Patterns Revealed in the Input Space

no code implementations 18 Jan 2021 Shihao Zhao, Xingjun Ma, Yisen Wang, James Bailey, Bo Li, Yu-Gang Jiang

In this paper, we focus on image classification and propose a method to visualize and understand the class-wise knowledge (patterns) learned by DNNs under three different settings including natural, backdoor and adversarial.

Image Classification

Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions

no code implementations 17 Jan 2021 Nodens Koren, Qiuhong Ke, Yisen Wang, James Bailey, Xingjun Ma

Understanding the actions of both humans and artificial intelligence (AI) agents is important before modern AI systems can be fully integrated into our daily life.

Adversarial Attack

Towards A Unified Understanding and Improving of Adversarial Transferability

no code implementations ICLR 2021 Xin Wang, Jie Ren, Shuyun Lin, Xiangming Zhu, Yisen Wang, Quanshi Zhang

We discover and prove the negative correlation between the adversarial transferability and the interaction inside adversarial perturbations.

Efficient Sampling for Generative Adversarial Networks with Coupling Markov Chains

no code implementations 1 Jan 2021 Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin

Recently, sampling methods have been successfully applied to enhance the sample quality of Generative Adversarial Networks (GANs).

Intriguing class-wise properties of adversarial training

no code implementations 1 Jan 2021 Qi Tian, Kun Kuang, Fei Wu, Yisen Wang

Adversarial training is one of the most effective approaches to improve model robustness against adversarial examples.

Adversarial Robustness

A Unified Approach to Interpreting and Boosting Adversarial Transferability

1 code implementation 8 Oct 2020 Xin Wang, Jie Ren, Shuyun Lin, Xiangming Zhu, Yisen Wang, Quanshi Zhang

We discover and prove the negative correlation between the adversarial transferability and the interaction inside adversarial perturbations.

Improving Query Efficiency of Black-box Adversarial Attack

1 code implementation ECCV 2020 Yang Bai, Yuyuan Zeng, Yong Jiang, Yisen Wang, Shu-Tao Xia, Weiwei Guo

Deep neural networks (DNNs) have demonstrated excellent performance on various tasks; however, they are at risk from adversarial examples, which can be easily generated when the target model is accessible to an attacker (the white-box setting).

Adversarial Attack

Temporal Calibrated Regularization for Robust Noisy Label Learning

no code implementations 1 Jul 2020 Dongxian Wu, Yisen Wang, Zhuobin Zheng, Shu-Tao Xia

Deep neural networks (DNNs) exhibit great success on many tasks with the help of large-scale, well-annotated datasets.

Normalized Loss Functions for Deep Learning with Noisy Labels

4 code implementations ICML 2020 Xingjun Ma, Hanxun Huang, Yisen Wang, Simone Romano, Sarah Erfani, James Bailey

However, in practice, simply being robust is not sufficient for a loss function to train accurate DNNs.

Ranked #30 on Image Classification on mini WebVision 1.0 (ImageNet Top-1 Accuracy metric)

Learning with noisy labels
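The normalization at the heart of the paper divides a loss by its sum over all possible labels, which bounds it and is what confers robustness; below is a hedged sketch for cross entropy (the paper's active-passive loss combinations are not shown).

```python
import torch
import torch.nn.functional as F

def normalized_cross_entropy(logits, targets):
    # NCE = CE(y) / sum_k CE(k): dividing by the loss summed over all
    # labels bounds the value in [0, 1] regardless of label noise
    log_probs = F.log_softmax(logits, dim=1)
    ce = -log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    denom = -log_probs.sum(dim=1)
    return (ce / denom).mean()
```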

Improving Adversarial Robustness Requires Revisiting Misclassified Examples

1 code implementation ICLR 2020 Yisen Wang, Difan Zou, Jin-Feng Yi, James Bailey, Xingjun Ma, Quanquan Gu

In this paper, we investigate the distinctive influence of misclassified and correctly classified examples on the final robustness of adversarial training.

Adversarial Robustness

Adversarial Weight Perturbation Helps Robust Generalization

3 code implementations NeurIPS 2020 Dongxian Wu, Shu-Tao Xia, Yisen Wang

The study of improving the robustness of deep neural networks against adversarial examples has grown rapidly in recent years.

Adversarial Robustness

Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets

3 code implementations ICLR 2020 Dongxian Wu, Yisen Wang, Shu-Tao Xia, James Bailey, Xingjun Ma

We find that using more gradients from the skip connections rather than from the residual modules, according to a decay factor, allows one to craft adversarial examples with high transferability.
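That decay-factor idea can be sketched with backward hooks: scale the gradient flowing back through each residual branch by gamma < 1, so relatively more gradient reaches the input through the skip connections. The block-name heuristic below assumes torchvision-style ResNet blocks and is illustrative only.

```python
import torch.nn as nn

def apply_skip_gradient_decay(model, gamma=0.5):
    # scale gradients entering each residual branch; gradients through
    # the identity skip connection are left untouched
    def scale(module, grad_input, grad_output):
        return tuple(g * gamma if g is not None else g for g in grad_input)
    for m in model.modules():
        if m.__class__.__name__ in ("BasicBlock", "Bottleneck"):
            m.conv1.register_full_backward_hook(scale)  # first op of the branch
```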

Dirichlet Latent Variable Hierarchical Recurrent Encoder-Decoder in Dialogue Generation

no code implementations IJCNLP 2019 Min Zeng, Yisen Wang, Yuan Luo

Building on this, we further find that there is redundancy among the dimensions of the latent variable, and that the lengths and sentence patterns of the responses can be strongly correlated with individual dimensions of the latent variable.

Dialogue Generation Sentence

Symmetric Cross Entropy for Robust Learning with Noisy Labels

4 code implementations ICCV 2019 Yisen Wang, Xingjun Ma, Zaiyi Chen, Yuan Luo, Jin-Feng Yi, James Bailey

In this paper, we show that DNN learning with Cross Entropy (CE) exhibits overfitting to noisy labels on some classes ("easy" classes), but, more surprisingly, also suffers from significant under-learning on some other classes ("hard" classes).

Learning with noisy labels
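The remedy built on this observation is a Symmetric Cross Entropy that adds a reverse term in which prediction and label swap roles; a hedged sketch follows (the log-clipping constant and the alpha/beta weights are commonly cited defaults and should be treated as assumptions).

```python
import torch
import torch.nn.functional as F

def symmetric_cross_entropy(logits, targets, alpha=0.1, beta=1.0, log_clip=-4.0):
    # usual CE: -log p(y | x)
    ce = F.cross_entropy(logits, targets)
    # reverse CE: -sum_k p(k | x) * log q(k | x), with log(0) on the
    # one-hot label distribution q clipped to `log_clip`
    pred = F.softmax(logits, dim=1)
    one_hot = F.one_hot(targets, num_classes=logits.size(1)).float()
    log_q = one_hot.clamp_min(1e-7).log().clamp_min(log_clip)
    rce = -(pred * log_q).sum(dim=1).mean()
    return alpha * ce + beta * rce
```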

Joint Semantic Domain Alignment and Target Classifier Learning for Unsupervised Domain Adaptation

no code implementations 10 Jun 2019 Dong-Dong Chen, Yisen Wang, Jin-Feng Yi, Zaiyi Chen, Zhi-Hua Zhou

Unsupervised domain adaptation aims to transfer the classifier learned from the source domain to the target domain in an unsupervised manner.

Unsupervised Domain Adaptation

Learning Deep Hidden Nonlinear Dynamics from Aggregate Data

no code implementations 22 Jul 2018 Yisen Wang, Bo Dai, Lingkai Kong, Sarah Monazam Erfani, James Bailey, Hongyuan Zha

Learning nonlinear dynamics from diffusion data is a challenging problem since the individuals observed may be different at different time points, generally following an aggregate behaviour.

Decoupled Networks

1 code implementation CVPR 2018 Weiyang Liu, Zhen Liu, Zhiding Yu, Bo Dai, Rongmei Lin, Yisen Wang, James M. Rehg, Le Song

Inner product-based convolution has been a central component of convolutional neural networks (CNNs) and the key to learning visual representations.

Iterative Learning with Open-set Noisy Labels

1 code implementation CVPR 2018 Yisen Wang, Weiyang Liu, Xingjun Ma, James Bailey, Hongyuan Zha, Le Song, Shu-Tao Xia

We refer to this more complex scenario as the open-set noisy label problem and show that it is nontrivial to make accurate predictions in this setting.

Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality

1 code implementation ICLR 2018 Xingjun Ma, Bo Li, Yisen Wang, Sarah M. Erfani, Sudanthi Wijewickrema, Grant Schoenebeck, Dawn Song, Michael E. Houle, James Bailey

Deep Neural Networks (DNNs) have recently been shown to be vulnerable against adversarial examples, which are carefully crafted instances that can mislead DNNs to make errors during prediction.

Adversarial Defense

Residual Convolutional CTC Networks for Automatic Speech Recognition

no code implementations 24 Feb 2017 Yisen Wang, Xuejiao Deng, Songbai Pu, Zhiheng Huang

Furthermore, we introduce a CTC-based system combination, which is different from the conventional frame-wise senone-based one.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +1

Unifying Decision Trees Split Criteria Using Tsallis Entropy

no code implementations 25 Nov 2015 Yisen Wang, Chaobing Song, Shu-Tao Xia

In this paper, a Tsallis Entropy Criterion (TEC) algorithm is proposed to unify Shannon entropy, Gain Ratio, and the Gini index, generalizing the split criteria of decision trees.
