Search Results for author: Jian-Yu Wang

Found 14 papers, 8 papers with code

Adversarial Examples for Semantic Segmentation and Object Detection

2 code implementations ICCV 2017 Cihang Xie, Jian-Yu Wang, Zhishuai Zhang, Yuyin Zhou, Lingxi Xie, Alan Yuille

Our observation is that both segmentation and detection are based on classifying multiple targets on an image (e.g., the basic target is a pixel or a receptive field in segmentation, and an object proposal in detection), which inspires us to optimize a loss function over a set of pixels/proposals for generating adversarial perturbations.

Adversarial Attack Object +4
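The multi-target formulation described above lends itself to a compact sketch: sum a classification loss over every target (each pixel or proposal) and ascend its gradient on the input. The per-pixel linear classifier, step sizes, and function names below are illustrative stand-ins, not the segmentation/detection networks the paper attacks.

```python
import numpy as np

def multi_target_attack(x, W, true_labels, steps=10, alpha=0.01):
    """Toy sketch: craft a perturbation that flips many per-pixel
    classifications at once by ascending the cross-entropy loss summed
    over all targets (pixels), echoing the paper's core idea.
    x: (n_pixels, d) features; W: (d, n_classes) shared linear classifier."""
    x_adv = x.copy()
    for _ in range(steps):
        logits = x_adv @ W                        # (n_pixels, n_classes)
        logits -= logits.max(axis=1, keepdims=True)
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)
        # gradient of the summed cross-entropy w.r.t. the logits is (p - y)
        grad_logits = p.copy()
        grad_logits[np.arange(len(x)), true_labels] -= 1.0
        grad_x = grad_logits @ W.T                # back to input space
        x_adv += alpha * np.sign(grad_x)          # FGSM-style ascent step
    return x_adv
```

Because the loss is a sum over independent targets, each pixel's row of the gradient steers its own perturbation, so a single update attacks all targets simultaneously.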

DeepVoting: A Robust and Explainable Deep Network for Semantic Part Detection under Partial Occlusion

no code implementations CVPR 2018 Zhishuai Zhang, Cihang Xie, Jian-Yu Wang, Lingxi Xie, Alan L. Yuille

The first layer extracts the evidence of local visual cues, and the second layer performs a voting mechanism by utilizing the spatial relationship between visual cues and semantic parts.

Semantic Part Detection
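The two-layer cue-and-vote design can be illustrated with a toy voting pass: detected local cues cast votes for a semantic part's location through spatial offsets. The hand-supplied offsets below stand in for the spatial relationships DeepVoting learns.

```python
import numpy as np

def vote_for_parts(cue_map, offsets):
    """Toy sketch of the two-layer idea: each nonzero entry of cue_map
    (a detected visual cue) votes for a part location displaced by each
    learned offset; votes accumulate into a part-evidence map."""
    votes = np.zeros_like(cue_map, dtype=float)
    h, w = cue_map.shape
    ys, xs = np.nonzero(cue_map)
    for y, x in zip(ys, xs):
        for dy, dx in offsets:                 # one vote per cue per offset
            py, px = y + dy, x + dx
            if 0 <= py < h and 0 <= px < w:    # ignore out-of-image votes
                votes[py, px] += cue_map[y, x]
    return votes
```

Under partial occlusion some cues disappear, but the surviving ones still vote for the same part location, which is what makes this scheme comparatively robust.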

Improving Transferability of Adversarial Examples with Input Diversity

2 code implementations CVPR 2019 Cihang Xie, Zhishuai Zhang, Yuyin Zhou, Song Bai, Jian-Yu Wang, Zhou Ren, Alan Yuille

We hope that our proposed attack strategy can serve as a strong benchmark baseline for evaluating the robustness of networks to adversaries and the effectiveness of different defense methods in the future.

Adversarial Attack Image Classification
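The input-diversity idea (randomly resizing and padding the input before each gradient step so the attack does not overfit one model's input geometry) can be sketched roughly as follows; the size range, nearest-neighbour resizing, and function name are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def input_diversity(x, p=0.5, low=24, rng=None):
    """Sketch of the input-diversity transform: with probability p,
    randomly downscale a square image and zero-pad it back to its
    original size at a random position; otherwise return it unchanged."""
    rng = rng or np.random.default_rng()
    h, w = x.shape[:2]
    if rng.random() >= p:
        return x                                # keep the clean input
    new = rng.integers(low, h)                  # random smaller side length
    idx = np.arange(new) * h // new             # nearest-neighbour sampling
    small = x[idx][:, idx]                      # downscaled image
    top = rng.integers(0, h - new + 1)
    left = rng.integers(0, w - new + 1)
    out = np.zeros_like(x)
    out[top:top + new, left:left + new] = small  # random zero-padding
    return out
```

In the attack loop, gradients would be computed on `input_diversity(x_adv)` rather than `x_adv` itself, which is what improves cross-model transferability.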

Adversarial Attacks and Defences Competition

1 code implementation 31 Mar 2018 Alexey Kurakin, Ian Goodfellow, Samy Bengio, Yinpeng Dong, Fangzhou Liao, Ming Liang, Tianyu Pang, Jun Zhu, Xiaolin Hu, Cihang Xie, Jian-Yu Wang, Zhishuai Zhang, Zhou Ren, Alan Yuille, Sangxia Huang, Yao Zhao, Yuzhe Zhao, Zhonglin Han, Junjiajia Long, Yerkebulan Berdibekov, Takuya Akiba, Seiya Tokui, Motoki Abe

To accelerate research on adversarial examples and robustness of machine learning classifiers, Google Brain organized a NIPS 2017 competition that encouraged researchers to develop new methods to generate adversarial examples as well as to develop new ways to defend against them.

BIG-bench Machine Learning

Zero-Shot Transfer VQA Dataset

no code implementations 2 Nov 2018 Yuanpeng Li, Yi Yang, Jian-Yu Wang, Wei Xu

Therefore, to accelerate this research, we propose a new zero-shot transfer VQA (ZST-VQA) dataset by reorganizing the existing VQA v1.0 dataset so that during training, some words appear only in one module (i.e., questions) but not in the other (i.e., answers).

Question Answering Transfer Learning +1
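The reorganization described above can be sketched as a simple filter: hold out a set of answer words, keep a question-answer pair for training only if its answer is not held out, and route the rest to the test split. The function and variable names are illustrative, not the paper's actual construction code.

```python
def zero_shot_split(pairs, held_out_words):
    """Sketch of the ZST-VQA reorganization idea: held-out words may
    still appear inside training questions, but never as a training
    answer, so answering with them at test time requires transfer."""
    train = [(q, a) for q, a in pairs if a not in held_out_words]
    test = [(q, a) for q, a in pairs if a in held_out_words]
    return train, test
```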

Towards Adversarially Robust Object Detection

no code implementations ICCV 2019 Haichao Zhang, Jian-Yu Wang

Object detection is an important vision task and has emerged as an indispensable component in many vision systems, making its robustness an increasingly important performance factor for practical applications.

Multi-Task Learning Object +2

Joint Adversarial Training: Incorporating both Spatial and Pixel Attacks

no code implementations 24 Jul 2019 Haichao Zhang, Jian-Yu Wang

In this paper, we propose a joint adversarial training method that incorporates both spatial transformation-based and pixel-value based attacks for improving model robustness.
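The combination of the two attack families can be sketched as a single composite step: a spatial transform followed by a bounded pixel-value perturbation. In this sketch a random integer translation and random sign noise stand in for the optimized spatial and gradient-based pixel attacks the paper actually trains against.

```python
import numpy as np

def joint_attack(x, pixel_eps=0.03, max_shift=2, rng=None):
    """Illustrative composition of the two attack types the paper
    combines: a spatial transform (here a random translation via roll)
    followed by an L-infinity-bounded pixel perturbation (here random
    sign noise), with the result clipped to the valid image range."""
    rng = rng or np.random.default_rng()
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    shifted = np.roll(np.roll(x, dy, axis=0), dx, axis=1)       # spatial attack
    noise = pixel_eps * rng.choice([-1.0, 1.0], size=x.shape)   # pixel attack
    return np.clip(shifted + noise, 0.0, 1.0)
```

Joint adversarial training would then feed such composite examples back into the training loop, so the model sees both perturbation types at once.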

Defense Against Adversarial Attacks Using Feature Scattering-based Adversarial Training

3 code implementations NeurIPS 2019 Haichao Zhang, Jian-Yu Wang

We introduce a feature scattering-based adversarial training approach for improving model robustness against adversarial attacks.
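The feature-scattering idea, perturbing a batch of inputs so that its features move away from the clean batch's features rather than attacking labels directly, can be sketched as follows. To stay self-contained, this sketch substitutes a simple mean squared feature distance and finite-difference gradients for the optimal-transport distance and backpropagation used in the paper.

```python
import numpy as np

def feature_scattering_step(x, feat, alpha=0.05, steps=5, delta=1e-3, rng=None):
    """Sketch of feature scattering: starting from a small random
    perturbation, repeatedly push the batch in the direction that
    increases the distance between perturbed and clean features."""
    rng = rng or np.random.default_rng()
    clean_feat = feat(x)
    x_adv = x + 0.01 * rng.standard_normal(x.shape)   # random start
    for _ in range(steps):
        base = np.mean((feat(x_adv) - clean_feat) ** 2)
        grad = np.zeros_like(x_adv)
        for i in np.ndindex(x_adv.shape):             # numeric gradient (toy scale)
            xp = x_adv.copy()
            xp[i] += delta
            grad[i] = (np.mean((feat(xp) - clean_feat) ** 2) - base) / delta
        x_adv += alpha * np.sign(grad)                # ascend feature distance
    return x_adv
```

Training on such label-free perturbations is what distinguishes this approach from standard supervised adversarial training.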

Compositional Generalization for Primitive Substitutions

1 code implementation IJCNLP 2019 Yuanpeng Li, Liang Zhao, Jian-Yu Wang, Joel Hestness

Compositional generalization is a basic mechanism in human language learning, but current neural networks lack such ability.

Few-Shot Learning Machine Translation +2

Enhancing Cross-task Black-Box Transferability of Adversarial Examples with Dispersion Reduction

2 code implementations CVPR 2020 Yantao Lu, Yunhan Jia, Jian-Yu Wang, Bai Li, Weiheng Chai, Lawrence Carin, Senem Velipasalar

Neural networks are known to be vulnerable to carefully crafted adversarial examples, and these malicious samples often transfer, i.e., they remain adversarial even against other models.

Adversarial Attack Image Classification +5

Machine Learning on Volatile Instances

no code implementations 12 Mar 2020 Xiaoxi Zhang, Jian-Yu Wang, Gauri Joshi, Carlee Joe-Wong

Due to the massive size of the neural network models and training datasets used in machine learning today, it is imperative to distribute stochastic gradient descent (SGD) by splitting up tasks such as gradient evaluation across multiple worker nodes.

BIG-bench Machine Learning
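The distributed-SGD setup the paper studies (splitting gradient evaluation across worker nodes) reduces to a simple pattern: shard the batch, compute one local gradient per worker, and average at the server. The least-squares loss and function names below are illustrative, not the paper's workload.

```python
import numpy as np

def distributed_sgd_step(w, X, y, n_workers=4, lr=0.1):
    """Sketch of one distributed SGD step: the batch is split evenly
    across workers, each computes a local gradient (here for a
    least-squares loss), and the averaged gradient updates the model."""
    shards_X = np.array_split(X, n_workers)
    shards_y = np.array_split(y, n_workers)
    grads = []
    for Xi, yi in zip(shards_X, shards_y):   # one gradient per worker
        err = Xi @ w - yi
        grads.append(Xi.T @ err / len(yi))
    return w - lr * np.mean(grads, axis=0)   # server-side average and update
```

With equal-sized shards the averaged gradient equals the full-batch gradient; on volatile (preemptible) instances, the interesting questions are what happens when some workers drop out mid-step and how to provision them cheaply.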

Band-limited Soft Actor Critic Model

1 code implementation 19 Jun 2020 Miguel Campo, Zhengxing Chen, Luke Kung, Kittipat Virochsiri, Jian-Yu Wang

Soft Actor Critic (SAC) algorithms show remarkable performance in complex simulated environments.
