Search Results for author: Xi Wu

Found 32 papers, 10 papers with code

Stochastic Actor-Executor-Critic for Image-to-Image Translation

no code implementations 14 Dec 2021 Ziwei Luo, Jing Hu, Xin Wang, Siwei Lyu, Bin Kong, Youbing Yin, Qi Song, Xi Wu

Training a model-free deep reinforcement learning model to solve image-to-image translation is difficult since it involves high-dimensional continuous state and action spaces.

Continuous Control Image-to-Image Translation +2

Revisiting Adversarial Robustness of Classifiers With a Reject Option

no code implementations AAAI Workshop AdvML 2022 Jiefeng Chen, Jayaram Raghuram, Jihye Choi, Xi Wu, Yingyu Liang, Somesh Jha

Motivated by this metric, we propose novel loss functions and a robust training method, stratified adversarial training with rejection (SATR), for a classifier with a reject option, where the goal is to accept and correctly classify small input perturbations while allowing the rejection of larger input perturbations that cannot be correctly classified.

Adversarial Robustness Image Classification
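
A minimal sketch of the reject-option interface described above, in Python (the confidence threshold, the -1 reject label, and all names are illustrative assumptions; SATR's actual loss functions are defined in the paper):

    import numpy as np

    def predict_with_reject(probs, threshold=0.7):
        """Reject-option prediction: return the argmax class when the
        classifier is confident, and the special label -1 (reject)
        otherwise. `probs` is an (n, k) array of softmax outputs."""
        conf = probs.max(axis=1)
        labels = probs.argmax(axis=1)
        labels[conf < threshold] = -1  # reject low-confidence inputs
        return labels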

Towards Evaluating the Robustness of Neural Networks Learned by Transduction

1 code implementation ICLR 2022 Jiefeng Chen, Xi Wu, Yang Guo, Yingyu Liang, Somesh Jha

There has been emerging interest in using transductive learning for adversarial robustness (Goldwasser et al., NeurIPS 2020; Wu et al., ICML 2020; Wang et al., arXiv 2021).

Adversarial Robustness Bilevel Optimization

Detecting Errors and Estimating Accuracy on Unlabeled Data with Self-training Ensembles

1 code implementation NeurIPS 2021 Jiefeng Chen, Frederick Liu, Besim Avci, Xi Wu, Yingyu Liang, Somesh Jha

This observation leads to two challenging tasks: (1) unsupervised accuracy estimation, which aims to estimate the accuracy of a pre-trained classifier on a set of unlabeled test inputs; and (2) error detection, which aims to identify misclassified test inputs.
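
A toy sketch of the agreement-based intuition behind these two tasks (the paper's method iteratively self-trains the ensemble; this simplified version only takes a majority vote, and all names are assumptions):

    import numpy as np

    def estimate_accuracy_and_errors(f_preds, ensemble_preds):
        """Proxy for unsupervised accuracy estimation and error
        detection: score the classifier's predictions `f_preds`
        (shape (n,)) by agreement with a majority vote over
        `ensemble_preds` (shape (m, n), one row per member)."""
        vote = np.apply_along_axis(
            lambda col: np.bincount(col).argmax(), 0, ensemble_preds)
        agree = f_preds == vote
        return agree.mean(), np.where(~agree)[0]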

Towards Adversarial Robustness via Transductive Learning

no code implementations 15 Jun 2021 Jiefeng Chen, Yang Guo, Xi Wu, Tianqi Li, Qicheng Lao, Yingyu Liang, Somesh Jha

Compared to traditional "test-time" defenses, these defense mechanisms "dynamically retrain" the model based on test-time inputs via transductive learning; theoretically, attacking these defenses boils down to bilevel optimization, which seems to raise the difficulty for adaptive attacks.

Adversarial Robustness Bilevel Optimization
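
Schematically (our notation, not necessarily the paper's), attacking such a transductive defense is the bilevel problem $\max_{U' \in N_\epsilon(U)} L(\theta^*(U'), U', y)$ subject to $\theta^*(U') \in \arg\min_\theta L_{\mathrm{def}}(\theta; D_{\mathrm{train}}, U')$, where $U$ is the clean test batch, $U'$ a perturbed version within the threat model $N_\epsilon$, and the inner problem is the defender's transductive retraining step.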

Imperceptible Adversarial Examples for Fake Image Detection

no code implementations 3 Jun 2021 Quanyu Liao, Yuezun Li, Xin Wang, Bin Kong, Bin Zhu, Siwei Lyu, Youbing Yin, Qi Song, Xi Wu

Fooling people with highly realistic fake images generated with Deepfakes or GANs causes great disturbance to our society.

Face Swapping Fake Image Detection

Transferable Adversarial Examples for Anchor Free Object Detection

no code implementations 3 Jun 2021 Quanyu Liao, Xin Wang, Bin Kong, Siwei Lyu, Bin Zhu, Youbing Yin, Qi Song, Xi Wu

Deep neural networks have been demonstrated to be vulnerable to adversarial attacks: subtle perturbations can completely change the prediction results.

Adversarial Attack object-detection +1
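
To make "subtle perturbations" concrete, here is the generic one-step FGSM attack (Goodfellow et al., 2015) in PyTorch; the transferable attack for anchor-free detectors proposed in the paper is more involved:

    import torch

    def fgsm(model, x, y, eps=8/255):
        """One-step fast gradient sign attack: perturb each pixel by
        +/- eps in the direction that increases the loss. Assumes
        image pixels in [0, 1]; eps is illustrative."""
        x = x.clone().detach().requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x), y)
        loss.backward()
        return (x + eps * x.grad.sign()).clamp(0, 1).detach()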

Deep Learning based Multi-modal Computing with Feature Disentanglement for MRI Image Synthesis

no code implementations 6 May 2021 Yuchen Fei, Bo Zhan, Mei Hong, Xi Wu, Jiliu Zhou, Yan Wang

To take full advantage of the complementary information provided by different modalities, multi-modal MRI sequences are utilized as input.

Disentanglement Image Generation

Racetrack microresonator based electro-optic phase shifters on a 3C-silicon-carbide-on-insulator platform

no code implementations 11 Feb 2021 Tianren Fan, Xi Wu, Sai R. M. Vangapandu, Amir H. Hosseinnia, Ali A. Eftekhar, Ali Adibi

We report the first demonstration of integrated electro-optic (EO) phase shifters based on racetrack microresonators on a 3C-silicon-carbide-on-insulator (SiCOI) platform working at near-infrared (NIR) wavelengths.

Optics Applied Physics

Multilayer Haldane model

no code implementations 4 Jan 2021 Xi Wu, C. X. Zhang, M. A. Zubkov

We propose a model of layered materials in which each layer is described by the conventional Haldane model, while the inter-layer hopping parameter corresponds to the ABC stacking.

Mesoscale and Nanoscale Physics
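
For context, the conventional single-layer Haldane model referenced above has the standard textbook form (our notation, which may differ from the paper's): $H = t_1 \sum_{\langle i,j \rangle} c_i^\dagger c_j + t_2 \sum_{\langle\langle i,j \rangle\rangle} e^{i\nu_{ij}\phi} c_i^\dagger c_j + M \sum_i \xi_i c_i^\dagger c_i$, with nearest-neighbor hopping $t_1$, complex next-nearest-neighbor hopping of phase $\pm\phi$ ($\nu_{ij} = \pm 1$ by orientation), and a staggered sublattice potential $M$ ($\xi_i = \pm 1$); the multilayer model adds an inter-layer hopping term arranged according to the ABC stacking.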

Test-Time Adaptation and Adversarial Robustness

no code implementations 1 Jan 2021 Xi Wu, Yang Guo, Tianqi Li, Jiefeng Chen, Qicheng Lao, Yingyu Liang, Somesh Jha

On the positive side, we show that, if one is allowed to access the training data, then Domain Adversarial Neural Networks (DANN), an algorithm designed for unsupervised domain adaptation, can provide nontrivial robustness in the test-time maximin threat model against strong transfer attacks and adaptive fixed point attacks.

Adversarial Robustness Unsupervised Domain Adaptation
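
DANN's core component is a gradient reversal layer between the feature extractor and the domain classifier; a minimal PyTorch sketch of that standard trick (not code from this paper):

    import torch

    class GradReverse(torch.autograd.Function):
        """Identity on the forward pass, negated (scaled) gradient on
        the backward pass, so the feature extractor is trained
        adversarially against the domain classifier."""
        @staticmethod
        def forward(ctx, x, lam=1.0):
            ctx.lam = lam
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lam * grad_output, None

    # usage: domain_logits = domain_head(GradReverse.apply(features))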

ASMFS: Adaptive-Similarity-based Multi-modality Feature Selection for Classification of Alzheimer's Disease

no code implementations 16 Oct 2020 Yuang Shi, Chen Zu, Mei Hong, Luping Zhou, Lei Wang, Xi Wu, Jiliu Zhou, Daoqiang Zhang, Yan Wang

With the increasing amounts of high-dimensional heterogeneous data to be processed, multi-modality feature selection has become an important research direction in medical image analysis.

feature selection General Classification

Informative Outlier Matters: Robustifying Out-of-distribution Detection Using Outlier Mining

no code implementations 28 Sep 2020 Jiefeng Chen, Yixuan Li, Xi Wu, Yingyu Liang, Somesh Jha

We show that, by mining informative auxiliary OOD data, one can significantly improve OOD detection performance, and somewhat surprisingly, generalize to unseen adversarial attacks.

OOD Detection Out-of-Distribution Detection

ATOM: Robustifying Out-of-distribution Detection Using Outlier Mining

1 code implementation 26 Jun 2020 Jiefeng Chen, Yixuan Li, Xi Wu, Yingyu Liang, Somesh Jha

We show that, by mining informative auxiliary OOD data, one can significantly improve OOD detection performance, and somewhat surprisingly, generalize to unseen adversarial attacks.

OOD Detection Out-of-Distribution Detection
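
An illustrative sketch of the outlier-mining step, under the assumption that we keep the auxiliary samples the current detector finds hardest, i.e. least obviously OOD (the selection rule and `frac` are assumptions, not ATOM's exact procedure):

    import numpy as np

    def mine_informative_outliers(aux_ood_scores, frac=0.1):
        """From a large auxiliary OOD pool, keep the fraction with the
        lowest OOD scores: easy outliers add little training signal,
        so training concentrates on the informative ones."""
        n_keep = int(frac * len(aux_ood_scores))
        order = np.argsort(aux_ood_scores)  # ascending: hardest first
        return order[:n_keep]               # indices to train on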

Representation Bayesian Risk Decompositions and Multi-Source Domain Adaptation

no code implementations 22 Apr 2020 Xi Wu, Yang Guo, Jiefeng Chen, Yingyu Liang, Somesh Jha, Prasad Chalasani

Recent studies provide hints and failure examples for domain-invariant representation learning, a common approach to this problem, but the explanations provided are somewhat different and do not form a unified picture.

Domain Adaptation Representation Learning

Robust Out-of-distribution Detection for Neural Networks

1 code implementation AAAI Workshop AdvML 2022 Jiefeng Chen, Yixuan Li, Xi Wu, Yingyu Liang, Somesh Jha

Formally, we extensively study the problem of Robust Out-of-Distribution Detection on common OOD detection approaches, and show that state-of-the-art OOD detectors can be easily fooled by adding small perturbations to the in-distribution and OOD inputs.

OOD Detection Out-of-Distribution Detection

Category-wise Attack: Transferable Adversarial Examples for Anchor Free Object Detection

no code implementations 10 Feb 2020 Quanyu Liao, Xin Wang, Bin Kong, Siwei Lyu, Youbing Yin, Qi Song, Xi Wu

Deep neural networks have been demonstrated to be vulnerable to adversarial attacks: subtle perturbations can completely change the classification results.

Object Detection

Discovery of Bias and Strategic Behavior in Crowdsourced Performance Assessment

no code implementations 5 Aug 2019 Yifei Huang, Matt Shum, Xi Wu, Jason Zezhong Xiao

With the industry trend of shifting from a traditional hierarchical approach to a flatter management structure, crowdsourced performance assessment has gained mainstream popularity.

Fairness

Rearchitecting Classification Frameworks For Increased Robustness

no code implementations 26 May 2019 Varun Chandrasekaran, Brian Tang, Nicolas Papernot, Kassem Fawaz, Somesh Jha, Xi Wu

How can one design a classification paradigm that leverages these invariances to improve the robustness-accuracy trade-off?

Autonomous Driving Classification +1

Robust Attribution Regularization

1 code implementation NeurIPS 2019 Jiefeng Chen, Xi Wu, Vaibhav Rastogi, Yingyu Liang, Somesh Jha

An emerging problem in trustworthy machine learning is to train models that produce robust interpretations for their predictions.
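
An illustrative training objective in this spirit, using plain input gradients as a stand-in for the paper's IG-based attribution terms (the penalty form, $\lambda$, and all names are assumptions):

    import torch
    import torch.nn.functional as F

    def attribution_reg_loss(model, x, x_adv, y, lam=1.0):
        """Standard loss plus a penalty on how much a simple saliency
        map (input gradients) changes under perturbation."""
        def input_grad(inp):
            inp = inp.clone().detach().requires_grad_(True)
            loss = F.cross_entropy(model(inp), y)
            g, = torch.autograd.grad(loss, inp, create_graph=True)
            return g
        penalty = (input_grad(x) - input_grad(x_adv)).abs().sum()
        return F.cross_entropy(model(x), y) + lam * penalty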

Concise Explanations of Neural Networks using Adversarial Training

1 code implementation ICML 2020 Prasad Chalasani, Jiefeng Chen, Amrita Roy Chowdhury, Somesh Jha, Xi Wu

Our first contribution is a theoretical exploration of how these two properties (when using attributions based on Integrated Gradients, or IG) are related to adversarial training, for a class of 1-layer networks (which includes logistic regression models for binary and multi-class classification). For these networks we show that (a) adversarial training with an $\ell_\infty$-bounded adversary produces models with sparse attribution vectors, and (b) natural model training while encouraging stable explanations (via an extra term in the loss function) is equivalent to adversarial training.

Multi-class Classification
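
For reference, Integrated Gradients itself is standard (Sundararajan et al., 2017); a Riemann-sum sketch, not code from the paper:

    import torch

    def integrated_gradients(model, x, y, baseline=None, steps=50):
        """IG attribution: average input gradients of the class-y
        logit along the straight path from `baseline` to `x`, then
        scale by (x - baseline). `x` is a batched input, `y` an int."""
        if baseline is None:
            baseline = torch.zeros_like(x)
        total = torch.zeros_like(x)
        for k in range(1, steps + 1):
            xk = (baseline + (k / steps) * (x - baseline)).requires_grad_(True)
            out = model(xk)[:, y].sum()
            total += torch.autograd.grad(out, xk)[0]
        return (x - baseline) * total / steps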

Towards Understanding Limitations of Pixel Discretization Against Adversarial Attacks

1 code implementation 20 May 2018 Jiefeng Chen, Xi Wu, Vaibhav Rastogi, Yingyu Liang, Somesh Jha

We analyze our results in a theoretical framework and offer strong evidence that pixel discretization is unlikely to work on all but the simplest of the datasets.
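
A minimal sketch of the preprocessing defense being analyzed, assuming pixels in [0, 1] and evenly spaced codewords (the simplest codeword choice; names are ours):

    import numpy as np

    def discretize_pixels(img, levels=8):
        """Snap each pixel to the nearest of `levels` evenly spaced
        codewords, hoping to erase small adversarial perturbations."""
        codewords = np.linspace(0.0, 1.0, levels)
        idx = np.abs(img[..., None] - codewords).argmin(axis=-1)
        return codewords[idx]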

The Manifold Assumption and Defenses Against Adversarial Perturbations

no code implementations ICLR 2018 Xi Wu, Uyeong Jang, Lingjiao Chen, Somesh Jha

Interestingly, we find that a recent objective by Madry et al. encourages training a model that satisfies our formal version of the goodness property well, but exerts only weak control over points that are misclassified with low confidence.
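
The Madry et al. objective referred to above is the standard saddle-point formulation of adversarial training: $\min_\theta \, \mathbb{E}_{(x,y)\sim D} \big[ \max_{\|\delta\|_\infty \le \epsilon} L(\theta, x+\delta, y) \big]$.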

Reinforcing Adversarial Robustness using Model Confidence Induced by Adversarial Training

no code implementations ICML 2018 Xi Wu, Uyeong Jang, Jiefeng Chen, Lingjiao Chen, Somesh Jha

In this paper we study leveraging confidence information induced by adversarial training to reinforce adversarial robustness of a given adversarially trained model.

Adversarial Robustness

Tuple-oriented Compression for Large-scale Mini-batch Stochastic Gradient Descent

no code implementations 22 Feb 2017 Fengan Li, Lingjiao Chen, Yijing Zeng, Arun Kumar, Jeffrey F. Naughton, Jignesh M. Patel, Xi Wu

We fill this crucial research gap by proposing a new lossless compression scheme we call tuple-oriented compression (TOC) that is inspired by an unlikely source, the string/text compression scheme Lempel-Ziv-Welch, but tailored to mini-batch gradient descent (MGD) in a way that preserves tuple boundaries within mini-batches.

Data Compression Text Compression
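
For background on the "unlikely source", a textbook LZW encoder in Python (TOC itself adapts the idea to mini-batch data while preserving tuple boundaries; this sketch is not TOC):

    def lzw_encode(data):
        """Classic LZW: emit dictionary codes for the longest known
        prefixes of `data` (a bytes object), growing the dictionary
        with each new phrase."""
        table = {bytes([i]): i for i in range(256)}
        w, out = b"", []
        for byte in data:
            wc = w + bytes([byte])
            if wc in table:
                w = wc
            else:
                out.append(table[w])
                table[wc] = len(table)  # register the new phrase
                w = bytes([byte])
        if w:
            out.append(table[w])
        return out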

Bolt-on Differential Privacy for Scalable Stochastic Gradient Descent-based Analytics

1 code implementation 15 Jun 2016 Xi Wu, Fengan Li, Arun Kumar, Kamalika Chaudhuri, Somesh Jha, Jeffrey F. Naughton

This paper takes a first step to remedy this disconnect and proposes a private SGD algorithm to address both issues in an integrated manner.
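
A sketch of the "bolt-on" output-perturbation pattern, assuming an $\ell_2$ sensitivity bound for the trained weights is available (noise follows the standard density proportional to $\exp(-\epsilon\|b\|_2/\Delta)$; the paper's exact mechanism and constants are in the text):

    import numpy as np

    def output_perturbation(theta, epsilon, sensitivity, rng=None):
        """Train non-privately, then privatize by adding noise whose
        norm is Gamma(d, sensitivity/epsilon) distributed and whose
        direction is uniform on the sphere."""
        rng = np.random.default_rng() if rng is None else rng
        d = theta.shape[0]
        direction = rng.standard_normal(d)
        direction /= np.linalg.norm(direction)
        radius = rng.gamma(shape=d, scale=sensitivity / epsilon)
        return theta + radius * direction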

Revisiting Differentially Private Regression: Lessons From Learning Theory and their Consequences

no code implementations 20 Dec 2015 Xi Wu, Matthew Fredrikson, Wentao Wu, Somesh Jha, Jeffrey F. Naughton

Perhaps more importantly, our theory reveals that the most basic mechanism in differential privacy, output perturbation, can be used to obtain a better tradeoff for all convex-Lipschitz-bounded learning tasks.

Learning Theory
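
For reference, output perturbation releases $\hat\theta + b$ with noise density $p(b) \propto \exp(-\epsilon \|b\|_2 / \Delta_2)$, where $\Delta_2$ bounds the $\ell_2$ sensitivity of the non-private minimizer $\hat\theta$; for $\lambda$-strongly convex, $L$-Lipschitz regularized ERM on $n$ points the classical bound is $\Delta_2 \le 2L/(n\lambda)$ (standard form, not the paper's exact statement).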

Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks

1 code implementation 14 Nov 2015 Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, Ananthram Swami

In this work, we introduce a defensive mechanism called defensive distillation to reduce the effectiveness of adversarial samples on DNNs.

Autonomous Vehicles
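
A minimal sketch of the distillation loss at temperature $T$ (defensive distillation trains both teacher and student at a high $T$ and deploys the student at $T = 1$; the value of $T$ here is illustrative):

    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, T=20.0):
        """Cross-entropy of the student against the teacher's
        softened probabilities at temperature T."""
        soft_targets = F.softmax(teacher_logits / T, dim=1)
        log_probs = F.log_softmax(student_logits / T, dim=1)
        return -(soft_targets * log_probs).sum(dim=1).mean()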
