Search Results for author: Hongge Chen

Found 17 papers, 11 papers with code

Attacking Visual Language Grounding with Adversarial Examples: A Case Study on Neural Image Captioning

2 code implementations ACL 2018 Hongge Chen, Huan Zhang, Pin-Yu Chen, Jinfeng Yi, Cho-Jui Hsieh

Our extensive experiments show that our algorithm can successfully craft visually similar adversarial examples with randomly targeted captions or keywords, and that the adversarial examples can be made highly transferable to other image captioning systems.

Caption Generation · Image Captioning
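
As a rough illustration of how such targeted caption attacks operate, here is a minimal gradient-descent sketch; the interface `model(image, teacher_force=...)` returning per-token logits is a hypothetical assumption, not the paper's actual algorithm or API.

```python
import torch

def targeted_caption_attack(model, image, target_tokens, eps=0.05,
                            steps=100, lr=0.01):
    """Sketch of a targeted attack: find a small L-inf perturbation that
    makes the captioner assign high probability to `target_tokens`.
    `model(image, teacher_force=...)` returning (seq_len, vocab) logits
    is a hypothetical interface."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = model(image + delta, teacher_force=target_tokens)
        loss = torch.nn.functional.cross_entropy(logits, target_tokens)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # project back: keep it visually similar
    return (image + delta).detach()
```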

Towards Fast Computation of Certified Robustness for ReLU Networks

6 code implementations ICML 2018 Tsui-Wei Weng, Huan Zhang, Hongge Chen, Zhao Song, Cho-Jui Hsieh, Duane Boning, Inderjit S. Dhillon, Luca Daniel

Verifying the robustness property of a general Rectified Linear Unit (ReLU) network is an NP-complete problem [Katz, Barrett, Dill, Julian and Kochenderfer CAV17].
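
Methods like the paper's Fast-Lin sidestep this NP-completeness by computing sound but incomplete bounds. As a much coarser illustration of the same idea, here is a plain interval bound propagation sketch for a feedforward ReLU network (the paper's linear relaxations are tighter than this):

```python
import numpy as np

def interval_bounds(weights, biases, x, eps):
    """Propagate the L-inf ball [x-eps, x+eps] through a ReLU network
    given as lists of weight matrices and bias vectors. Returns sound
    elementwise bounds on the output logits; if the true class's lower
    bound beats every other class's upper bound, the point is certified."""
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(zip(weights, biases)):
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        lo, hi = W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b
        if i < len(weights) - 1:                  # ReLU on hidden layers
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi
```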

Is Robustness the Cost of Accuracy? -- A Comprehensive Study on the Robustness of 18 Deep Image Classification Models

2 code implementations ECCV 2018 Dong Su, Huan Zhang, Hongge Chen, Jinfeng Yi, Pin-Yu Chen, Yupeng Gao

Prediction accuracy has long been the sole standard for comparing the performance of different image classification models, including in the ImageNet competition.

General Classification · Image Classification

The Limitations of Adversarial Training and the Blind-Spot Attack

no code implementations ICLR 2019 Huan Zhang, Hongge Chen, Zhao Song, Duane Boning, Inderjit S. Dhillon, Cho-Jui Hsieh

In our paper, we shed some light on the practicality and the hardness of adversarial training by showing that its effectiveness (robustness on the test set) correlates strongly with the distance between a test point and the manifold of training data embedded by the network.
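
A crude way to estimate that distance, assuming precomputed network embeddings, is a k-nearest-neighbor average; this is only a proxy for the paper's measure:

```python
import numpy as np

def knn_manifold_distance(train_emb, test_emb, k=5):
    """Mean distance from each test embedding to its k nearest training
    embeddings -- a rough proxy for how far a test point lies from the
    embedded training manifold."""
    d = np.linalg.norm(test_emb[:, None, :] - train_emb[None, :, :], axis=-1)
    return np.sort(d, axis=1)[:, :k].mean(axis=1)
```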

Robust Decision Trees Against Adversarial Examples

3 code implementations 27 Feb 2019 Hongge Chen, Huan Zhang, Duane Boning, Cho-Jui Hsieh

Although adversarial examples and model robustness have been extensively studied in the context of linear models and neural networks, research on this issue for tree-based models, and on how to make them robust against adversarial examples, is still limited.

Adversarial Attack · Adversarial Defense

Robustness Verification of Tree-based Models

2 code implementations NeurIPS 2019 Hongge Chen, Huan Zhang, Si Si, Yang Li, Duane Boning, Cho-Jui Hsieh

We show that there is a simple linear time algorithm for verifying a single tree, and for tree ensembles, the verification problem can be cast as a max-clique problem on a multi-partite graph with bounded boxicity.
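
The single-tree case can be sketched directly: traverse the tree once, following every branch whose decision region intersects the L∞ perturbation box, and collect the reachable leaves. The node keys below are hypothetical:

```python
def reachable_leaf_values(node, lo, hi):
    """Collect the values of every leaf reachable when feature i may
    move anywhere in [lo[i], hi[i]] (an L-inf box). Nodes are dicts
    with hypothetical keys 'feature', 'threshold', 'left', 'right',
    and 'value' at leaves."""
    if 'value' in node:                       # leaf
        return [node['value']]
    f, t = node['feature'], node['threshold']
    out = []
    if lo[f] <= t:                            # box overlaps the left branch
        out += reachable_leaf_values(node['left'], lo, hi)
    if hi[f] > t:                             # box overlaps the right branch
        out += reachable_leaf_values(node['right'], lo, hi)
    return out

# The tree is certifiably robust at x with radius eps iff every value in
# reachable_leaf_values(root, x - eps, x + eps) gives the same prediction.
```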

Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective

1 code implementation 10 Jun 2019 Kaidi Xu, Hongge Chen, Sijia Liu, Pin-Yu Chen, Tsui-Wei Weng, Mingyi Hong, Xue Lin

Graph neural networks (GNNs), which apply deep neural networks to graph data, have achieved strong performance on the task of semi-supervised node classification.

Adversarial Robustness · Classification +2

Towards Stable and Efficient Training of Verifiably Robust Neural Networks

2 code implementations ICLR 2020 Huan Zhang, Hongge Chen, Chaowei Xiao, Sven Gowal, Robert Stanforth, Bo Li, Duane Boning, Cho-Jui Hsieh

In this paper, we propose a new certified adversarial training method, CROWN-IBP, by combining the fast IBP bounds in a forward bounding pass with a tight linear-relaxation-based bound, CROWN, in a backward bounding pass.
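
Schematically, certified training plugs such bounds into the loss. In the sketch below, `bound_margin_upper` is a stand-in parameter for whichever bounding pass is used (IBP, CROWN, or their combination); the paper anneals the mixing weight over training:

```python
import torch.nn.functional as F

def certified_training_loss(model, x, y, eps, bound_margin_upper, kappa=0.5):
    """Certified-training objective (schematic, CROWN-IBP style).
    `bound_margin_upper(model, x, y, eps)` must return, for each class j,
    a sound upper bound on logit_j - logit_y (0 in column y); any sound
    bounding pass can be plugged in here."""
    natural = F.cross_entropy(model(x), y)      # standard loss
    s = bound_margin_upper(model, x, y, eps)    # (batch, classes) margins
    robust = F.cross_entropy(s, y)              # worst-case surrogate loss
    return kappa * natural + (1 - kappa) * robust
```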

Adversarial T-shirt! Evading Person Detectors in A Physical World

1 code implementation ECCV 2020 Kaidi Xu, Gaoyuan Zhang, Sijia Liu, Quanfu Fan, Mengshu Sun, Hongge Chen, Pin-Yu Chen, Yanzhi Wang, Xue Lin

To the best of our knowledge, this is the first work that models the effect of deformation for designing physical adversarial examples with respect to non-rigid objects such as T-shirts.

Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations

4 code implementations NeurIPS 2020 Huan Zhang, Hongge Chen, Chaowei Xiao, Bo Li, Mingyan Liu, Duane Boning, Cho-Jui Hsieh

Several works have shown this vulnerability via adversarial attacks, but existing approaches to improving the robustness of DRL under this setting have limited success and lack theoretical principles.

Reinforcement Learning (RL)

Multi-Stage Influence Function

no code implementations NeurIPS 2020 Hongge Chen, Si Si, Yang Li, Ciprian Chelba, Sanjiv Kumar, Duane Boning, Cho-Jui Hsieh

With this score, we can identify the pretraining examples in the pretraining task that contribute most to a prediction in the finetuning task.

Transfer Learning
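
For intuition, here is the classic single-stage influence score of Koh and Liang (not the paper's multi-stage extension), scoring a training example by gradient alignment through an inverse-Hessian-vector product:

```python
import torch

def influence_score(loss_train, loss_test, params, damping=0.01):
    """Classic influence approximation: -grad(L_test)^T H^{-1} grad(L_train).
    Here H^{-1} v is replaced by v / damping (an identity-Hessian
    placeholder); real implementations use LiSSA or conjugate gradient."""
    g_train = torch.autograd.grad(loss_train, params, retain_graph=True)
    g_test = torch.autograd.grad(loss_test, params, retain_graph=True)
    ihvp = [g / damping for g in g_train]       # placeholder H^{-1} grad
    return -sum((gt * h).sum() for gt, h in zip(g_test, ihvp))
```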

On $\ell_p$-norm Robustness of Ensemble Stumps and Trees

1 code implementation 20 Aug 2020 Yihan Wang, Huan Zhang, Hongge Chen, Duane Boning, Cho-Jui Hsieh

In this paper, we study the problem of robustness verification and certified defense with respect to general $\ell_p$ norm perturbations for ensemble decision stumps and trees.
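
For $\ell_\infty$ perturbations, an additive stump ensemble can be verified exactly: each coordinate can be optimized independently, and the stumps sharing a feature form a piecewise-constant function of that coordinate, which changes value only just past a threshold. A minimal sketch, with each stump as a hypothetical tuple (feature, threshold, left_value, right_value):

```python
import numpy as np
from collections import defaultdict

def worst_case_stump_sum(stumps, x, eps):
    """Exact minimum of an additive stump ensemble over the L-inf ball
    of radius eps around x. It suffices to test the interval's left end
    plus a point just past each in-range threshold. Each stump is a
    (feature, threshold, left_value, right_value) tuple, contributing
    left_value if x[f] <= threshold else right_value."""
    by_feature = defaultdict(list)
    for f, t, left, right in stumps:
        by_feature[f].append((t, left, right))
    total = 0.0
    for f, group in by_feature.items():
        lo, hi = x[f] - eps, x[f] + eps
        candidates = [lo] + [np.nextafter(t, np.inf)
                             for t, _, _ in group if lo <= t < hi]
        total += min(sum(l if c <= t else r for t, l, r in group)
                     for c in candidates)
    return total
```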

Robust Reinforcement Learning on State Observations with Learned Optimal Adversary

2 code implementations ICLR 2021 Huan Zhang, Hongge Chen, Duane Boning, Cho-Jui Hsieh

We study the robustness of reinforcement learning (RL) with adversarially perturbed state observations, which aligns with the setting of many adversarial attacks on deep reinforcement learning (DRL) and is also important for deploying real-world RL agents under unpredictable sensing noise.

Adversarial Attack · Continuous Control +2
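
The threat model can be sketched as an adversary that perturbs each observation within an $\ell_\infty$ ball before the agent sees it. The adversary below is a crude random-corner stand-in judged by the agent's own value estimate, not the paper's learned optimal adversary:

```python
import numpy as np

def perturb_observation(obs, eps, agent_value, n_trials=32, rng=None):
    """Pick the worst of a few random L-inf corner perturbations, judged
    by the agent's value estimate `agent_value(obs)` (a callable).
    A crude stand-in for a trained adversary policy."""
    rng = rng or np.random.default_rng()
    best, best_v = obs, agent_value(obs)
    for _ in range(n_trials):
        cand = obs + eps * rng.choice([-1.0, 1.0], size=obs.shape)
        v = agent_value(cand)
        if v < best_v:                 # keep the most damaging candidate
            best, best_v = cand, v
    return best
```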

SHIFT3D: Synthesizing Hard Inputs For Tricking 3D Detectors

no code implementations ICCV 2023 Hongge Chen, Zhao Chen, Gregory P. Meyer, Dennis Park, Carl Vondrick, Ashish Shrivastava, Yuning Chai

We present SHIFT3D, a differentiable pipeline for generating 3D shapes that are structurally plausible yet challenging to 3D object detectors.

Autonomous Driving Object

On $\ell_p$-norm Robustness of Ensemble Decision Stumps and Trees

no code implementations ICML 2020 Yihan Wang, Huan Zhang, Hongge Chen, Duane Boning, Cho-Jui Hsieh

In this paper, we study robustness verification and defense with respect to general $\ell_p$ norm perturbations for ensembles of decision trees and stumps.
