no code implementations • 14 Dec 2021 • Ziwei Luo, Jing Hu, Xin Wang, Siwei Lyu, Bin Kong, Youbing Yin, Qi Song, Xi Wu
Training a model-free deep reinforcement learning agent to solve image-to-image translation is difficult, since the problem involves high-dimensional continuous state and action spaces.
1 code implementation • 14 Dec 2021 • Ziwei Luo, Jing Hu, Xin Wang, Shu Hu, Bin Kong, Youbing Yin, Qi Song, Xi Wu, Siwei Lyu
We evaluate our method on several 2D and 3D medical image datasets, some of which contain large deformations.
no code implementations • AAAI Workshop AdvML 2022 • Jiefeng Chen, Jayaram Raghuram, Jihye Choi, Xi Wu, Yingyu Liang, Somesh Jha
Motivated by this metric, we propose novel loss functions and a robust training method, stratified adversarial training with rejection (SATR), for a classifier with a reject option, where the goal is to accept and correctly classify small input perturbations while allowing the rejection of larger input perturbations that cannot be correctly classified.
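The SATR objective itself is not given in this snippet. As a rough illustration of a classifier with an explicit reject class, the sketch below trains the model to classify mildly perturbed inputs correctly while routing heavily perturbed inputs to rejection; the perturbation radii, the loss weighting, the reject-class index, and the use of random sign noise instead of adversarially crafted perturbations are all simplifying assumptions, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def rejection_loss(model, x, y, eps_small=2/255, eps_large=16/255, reject_idx=10, alpha=1.0):
    """Illustrative loss for a classifier with an extra 'reject' output.

    - mildly perturbed inputs should still be classified as y,
    - heavily perturbed inputs should be routed to the reject class.
    This is a sketch of the general idea, not the SATR objective from the paper.
    """
    # accept & correctly classify mildly perturbed inputs (random sign noise as a stand-in for an attack)
    x_small = x + eps_small * torch.randn_like(x).sign()
    loss_accept = F.cross_entropy(model(x_small), y)

    # push heavily perturbed inputs toward the reject class
    x_large = x + eps_large * torch.randn_like(x).sign()
    reject_target = torch.full_like(y, reject_idx)
    loss_reject = F.cross_entropy(model(x_large), reject_target)

    return loss_accept + alpha * loss_reject
```

A faithful version would craft both perturbations with an attack such as PGD rather than random noise; the model is assumed to have one output per class plus one reject output.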
1 code implementation • ICLR 2022 • Jiefeng Chen, Xi Wu, Yang Guo, Yingyu Liang, Somesh Jha
There has been emerging interest in using transductive learning for adversarial robustness (Goldwasser et al., NeurIPS 2020; Wu et al., ICML 2020; Wang et al., ArXiv 2021).
1 code implementation • NeurIPS 2021 • Jiefeng Chen, Frederick Liu, Besim Avci, Xi Wu, Yingyu Liang, Somesh Jha
This observation leads to two challenging tasks: (1) unsupervised accuracy estimation, which aims to estimate the accuracy of a pre-trained classifier on a set of unlabeled test inputs; (2) error detection, which aims to identify mis-classified test inputs.
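Neither estimator from the paper is reproduced in this snippet. As a point of reference, the simplest baseline for both tasks uses max-softmax confidence: the fraction of high-confidence predictions as an accuracy estimate, and low confidence as an error-detection flag. The loader is assumed to yield unlabeled inputs only, and the threshold is an arbitrary placeholder.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def confidence_baseline(model, unlabeled_loader, threshold=0.9):
    """Max-softmax baseline for unsupervised accuracy estimation and error detection.
    A simple reference point, not the method proposed in the paper."""
    confidences, flagged = [], []
    for x in unlabeled_loader:
        probs = F.softmax(model(x), dim=1)
        conf, _ = probs.max(dim=1)
        confidences.append(conf)
        flagged.append(conf < threshold)          # candidate mis-classified inputs
    conf = torch.cat(confidences)
    est_accuracy = (conf >= threshold).float().mean().item()
    return est_accuracy, torch.cat(flagged)
```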
no code implementations • 15 Jun 2021 • Jiefeng Chen, Yang Guo, Xi Wu, Tianqi Li, Qicheng Lao, Yingyu Liang, Somesh Jha
Compared to traditional "test-time" defenses, these defense mechanisms "dynamically retrain" the model on the test-time inputs via transductive learning; theoretically, attacking these defenses boils down to bilevel optimization, which appears to make adaptive attacks harder.
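A rough illustration of that bilevel structure: the inner problem is the defender retraining on the (perturbed) test inputs, the outer problem is the attacker choosing the perturbation, and iterating the two gives a fixed-point-style adaptive attack. The function names below are placeholders for components the snippet does not specify.

```python
import copy
import torch

def fixed_point_attack(defender_train, attack, model_init, test_x, steps=5):
    """Sketch of the attacker/defender bilevel interaction (all names are illustrative).

    defender_train(model, x_adv) -> retrained model   (inner problem)
    attack(model, x)             -> perturbed inputs  (outer problem)
    Iterating the two maps searches for a perturbation that remains effective
    against the model retrained on that very perturbation.
    """
    x_adv = test_x.clone()
    model = copy.deepcopy(model_init)
    for _ in range(steps):
        model = defender_train(copy.deepcopy(model_init), x_adv)  # defender adapts transductively
        x_adv = attack(model, test_x)                             # attacker responds
    return x_adv, model
```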
no code implementations • 3 Jun 2021 • Quanyu Liao, Yuezun Li, Xin Wang, Bin Kong, Bin Zhu, Siwei Lyu, Youbing Yin, Qi Song, Xi Wu
Highly realistic fake images generated with Deepfake tools or GANs can fool people and cause serious social disruption.
no code implementations • 3 Jun 2021 • Quanyu Liao, Xin Wang, Bin Kong, Siwei Lyu, Bin Zhu, Youbing Yin, Qi Song, Xi Wu
Deep neural networks have been demonstrated to be vulnerable to adversarial attacks: subtle perturbations can completely change the prediction results.
no code implementations • 6 May 2021 • Yuchen Fei, Bo Zhan, Mei Hong, Xi Wu, Jiliu Zhou, Yan Wang
To take full advantage of the complementary information provided by different modalities, multi-modal MRI sequences are utilized as input.
no code implementations • 11 Feb 2021 • Tianren Fan, Xi Wu, Sai R. M. Vangapandu, Amir H. Hosseinnia, Ali A. Eftekhar, Ali Adibi
We report the first demonstration of integrated electro-optic (EO) phase shifters based on racetrack microresonators on a 3C-silicon-carbide-on-insulator (SiCOI) platform working at near-infrared (NIR) wavelengths.
Optics • Applied Physics
no code implementations • 4 Jan 2021 • Xi Wu, C. X. Zhang, M. A. Zubkov
We propose a model of layered materials in which each layer is described by the conventional Haldane model, while the inter-layer hopping parameter corresponds to ABC stacking.
Mesoscale and Nanoscale Physics
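For reference, a standard form of the single-layer Haldane Hamiltonian that serves as the building block (notation assumed: $t_1$ and $t_2$ are nearest- and next-nearest-neighbour hoppings, $\phi$ is the Haldane flux with chirality $\nu_{ij}=\pm 1$, and $M$ is a sublattice mass with $\xi_i=\pm 1$):

$$H = t_1 \sum_{\langle i,j \rangle} c_i^\dagger c_j + t_2 \sum_{\langle\langle i,j \rangle\rangle} e^{i \nu_{ij} \phi}\, c_i^\dagger c_j + M \sum_{i} \xi_i\, c_i^\dagger c_i$$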
no code implementations • 1 Jan 2021 • Xi Wu, Yang Guo, Tianqi Li, Jiefeng Chen, Qicheng Lao, Yingyu Liang, Somesh Jha
On the positive side, we show that, if one is allowed to access the training data, then Domain Adversarial Neural Networks (DANN), an algorithm designed for unsupervised domain adaptation, can provide nontrivial robustness in the test-time maximin threat model against strong transfer attacks and adaptive fixed-point attacks.
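The core mechanism of DANN is a gradient reversal layer sitting between a shared feature extractor and a domain classifier, so that the features are trained to be predictive of the label but uninformative about the domain. A minimal PyTorch sketch follows; the toy MLP feature extractor and head sizes are illustrative, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, negated (scaled) gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DANN(nn.Module):
    """Minimal DANN: shared features, label head, domain head behind gradient reversal."""
    def __init__(self, feat_dim=128, n_classes=10, lambd=1.0):
        super().__init__()
        self.features = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.label_head = nn.Linear(feat_dim, n_classes)
        self.domain_head = nn.Linear(feat_dim, 2)   # source vs. target domain
        self.lambd = lambd

    def forward(self, x):
        z = self.features(x)
        return self.label_head(z), self.domain_head(GradReverse.apply(z, self.lambd))
```

Training minimizes label loss on source data plus domain-classification loss on source and target data; the reversed gradient pushes the feature extractor to make the two domains indistinguishable.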
no code implementations • 27 Oct 2020 • Quanyu Liao, Xin Wang, Bin Kong, Siwei Lyu, Youbing Yin, Qi Song, Xi Wu
Deep neural networks are vulnerable to adversarial examples.
no code implementations • 16 Oct 2020 • Yuang Shi, Chen Zu, Mei Hong, Luping Zhou, Lei Wang, Xi Wu, Jiliu Zhou, Daoqiang Zhang, Yan Wang
With the increasing amounts of high-dimensional heterogeneous data to be processed, multi-modality feature selection has become an important research direction in medical image analysis.
no code implementations • 28 Sep 2020 • Jiefeng Chen, Yixuan Li, Xi Wu, Yingyu Liang, Somesh Jha
We show that, by mining informative auxiliary OOD data, one can significantly improve OOD detection performance, and somewhat surprisingly, generalize to unseen adversarial attacks.
1 code implementation • 26 Jun 2020 • Jiefeng Chen, Yixuan Li, Xi Wu, Yingyu Liang, Somesh Jha
We show that, by mining informative auxiliary OOD data, one can significantly improve OOD detection performance, and somewhat surprisingly, generalize to unseen adversarial attacks.
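The exact mining criterion is not given in this snippet. The sketch below shows one illustrative recipe: score a large auxiliary OOD pool with the current model, keep the samples it is most confident about (the most "in-distribution looking" and hence most informative), and penalize non-uniform predictions on them during training. Both the selection rule and the loss weighting are assumptions, not the paper's method.

```python
import torch
import torch.nn.functional as F

def mine_informative_outliers(model, aux_pool, keep=1024):
    """Keep the auxiliary OOD samples the current model is most confident about.
    Illustrative selection rule only."""
    with torch.no_grad():
        conf = F.softmax(model(aux_pool), dim=1).max(dim=1).values
    idx = conf.topk(keep).indices
    return aux_pool[idx]

def ood_training_loss(model, x_in, y_in, x_out, beta=0.5):
    """In-distribution cross-entropy plus a uniform-prediction penalty on mined outliers."""
    loss_in = F.cross_entropy(model(x_in), y_in)
    log_probs_out = F.log_softmax(model(x_out), dim=1)
    loss_out = -log_probs_out.mean()       # encourages a uniform posterior on outliers
    return loss_in + beta * loss_out
```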
no code implementations • 11 Jun 2020 • Pin Tang, Chen Zu, Mei Hong, Rui Yan, Xingchen Peng, Jianghong Xiao, Xi Wu, Jiliu Zhou, Luping Zhou, Yan Wang
In this paper, we propose a Dense SegU-net (DSU-net) framework for automatic NPC segmentation in MRI.
no code implementations • 22 Apr 2020 • Xi Wu, Yang Guo, Jiefeng Chen, Yingyu Liang, Somesh Jha, Prasad Chalasani
Recent studies provide hints and failure examples for domain-invariant representation learning, a common approach to this problem, but the explanations they offer differ and do not form a unified picture.
1 code implementation • AAAI Workshop AdvML 2022 • Jiefeng Chen, Yixuan Li, Xi Wu, Yingyu Liang, Somesh Jha
We formally and extensively study the problem of robust out-of-distribution detection for common OOD detection approaches, and show that state-of-the-art OOD detectors can easily be fooled by adding small perturbations to in-distribution and OOD inputs.
no code implementations • 10 Feb 2020 • Quanyu Liao, Xin Wang, Bin Kong, Siwei Lyu, Youbing Yin, Qi Song, Xi Wu
Deep neural networks have been demonstrated to be vulnerable to adversarial attacks: subtle perturbations can completely change the classification results.
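As a concrete reminder of what such a subtle perturbation looks like, here is the classic fast gradient sign method (FGSM) of Goodfellow et al.; it is shown only as background, not as the attack proposed in this paper.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8/255):
    """Fast Gradient Sign Method: one signed gradient step inside an L-infinity ball.
    Assumes inputs are images scaled to [0, 1]."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```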
no code implementations • 29 Jan 2020 • Shanhui Sun, Jing Hu, Mingqing Yao, Jinrong Hu, Xiaodong Yang, Qi Song, Xi Wu
To this end, this work tackles the two components jointly, in an end-to-end manner, via reinforcement learning.
no code implementations • 5 Aug 2019 • Yifei Huang, Matt Shum, Xi Wu, Jason Zezhong Xiao
With the industry trend of shifting from a traditional hierarchical approach to a flatter management structure, crowdsourced performance assessment has gained mainstream popularity.
no code implementations • 26 May 2019 • Varun Chandrasekaran, Brian Tang, Nicolas Papernot, Kassem Fawaz, Somesh Jha, Xi Wu
How can one design a classification paradigm that leverages these invariances to improve the robustness-accuracy trade-off?
1 code implementation • NeurIPS 2019 • Jiefeng Chen, Xi Wu, Vaibhav Rastogi, Yingyu Liang, Somesh Jha
An emerging problem in trustworthy machine learning is to train models that produce robust interpretations for their predictions.
1 code implementation • ICML 2020 • Prasad Chalasani, Jiefeng Chen, Amrita Roy Chowdhury, Somesh Jha, Xi Wu
Our first contribution is a theoretical exploration of how these two properties (when attributions are based on Integrated Gradients, or IG) relate to adversarial training, for a class of one-layer networks that includes logistic regression models for binary and multi-class classification. For these networks we show that (a) adversarial training with an $\ell_\infty$-bounded adversary produces models with sparse attribution vectors, and (b) natural training while encouraging stable explanations (via an extra term in the loss function) is equivalent to adversarial training.
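For reference, IG attributes a scalar score $f$ along the straight-line path from a baseline $x'$ to the input $x$: $\mathrm{IG}(x) = (x - x') \odot \int_0^1 \nabla f(x' + \alpha(x - x'))\, d\alpha$. The sketch below approximates the integral with a Riemann sum; the interface (a batched scalar-score function `f`) is an assumption for illustration.

```python
import torch

def integrated_gradients(f, x, baseline=None, steps=64):
    """Integrated Gradients: (x - x') times the average gradient of the scalar score f
    along the straight-line path from the baseline x' to the input x (Riemann sum)."""
    x = x.detach()
    baseline = torch.zeros_like(x) if baseline is None else baseline.detach()
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, *([1] * x.dim()))
    path = (baseline + alphas * (x - baseline)).requires_grad_(True)   # (steps, *x.shape)
    f(path).sum().backward()       # f maps a batch of points to one scalar score each
    avg_grad = path.grad.mean(dim=0)
    return (x - baseline) * avg_grad
```

Here `f` would typically be the logit of the class of interest, e.g. `lambda z: model(z)[:, target_class]`.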
1 code implementation • 20 May 2018 • Jiefeng Chen, Xi Wu, Vaibhav Rastogi, Yingyu Liang, Somesh Jha
We analyze our results in a theoretical framework and offer strong evidence that pixel discretization is unlikely to work on all but the simplest of the datasets.
no code implementations • ICLR 2018 • Xi Wu, Uyeong Jang, Lingjiao Chen, Somesh Jha
Interestingly, we find that a recent objective by Madry et al. encourages training a model that satisfies our formal version of the goodness property well, but exerts only weak control over points that are misclassified with low confidence.
no code implementations • ICML 2018 • Xi Wu, Uyeong Jang, Jiefeng Chen, Lingjiao Chen, Somesh Jha
In this paper we study leveraging confidence information induced by adversarial training to reinforce adversarial robustness of a given adversarially trained model.
no code implementations • 22 Feb 2017 • Fengan Li, Lingjiao Chen, Yijing Zeng, Arun Kumar, Jeffrey F. Naughton, Jignesh M. Patel, Xi Wu
We fill this crucial research gap by proposing a new lossless compression scheme we call tuple-oriented compression (TOC), which is inspired by an unlikely source, the string/text compression scheme Lempel-Ziv-Welch (LZW), but tailored to MGD in a way that preserves tuple boundaries within mini-batches.
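TOC itself is not reproduced here; for background, this is the classic LZW dictionary coder that it takes inspiration from, in plain Python.

```python
def lzw_compress(data: bytes) -> list:
    """Classic Lempel-Ziv-Welch coder (the scheme TOC is inspired by, not TOC itself).
    Emits dictionary codes; the dictionary grows as longer byte strings are seen."""
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc                          # extend the current match
        else:
            out.append(dictionary[w])       # emit the longest known prefix
            dictionary[wc] = next_code      # learn the new string
            next_code += 1
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out
```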
1 code implementation • 15 Jun 2016 • Xi Wu, Fengan Li, Arun Kumar, Kamalika Chaudhuri, Somesh Jha, Jeffrey F. Naughton
This paper takes a first step to remedy this disconnect and proposes a private SGD algorithm to address both issues in an integrated manner.
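The snippet does not spell out the algorithm, so the sketch below only shows the two standard ingredients of private SGD, per-example gradient clipping and Gaussian noise; the learning-rate, clipping, and noise settings are placeholders, not the paper's parameters.

```python
import numpy as np

def private_sgd_step(w, per_example_grads, lr=0.1, clip=1.0, noise_mult=1.0, rng=None):
    """One step of the standard private-SGD recipe: clip each per-example gradient to
    L2 norm `clip`, average, and add Gaussian noise with std noise_mult * clip / batch_size.
    A generic sketch of the common ingredients, not the specific algorithm in the paper."""
    rng = rng or np.random.default_rng(0)
    clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12)) for g in per_example_grads]
    g_bar = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip / len(per_example_grads), size=g_bar.shape)
    return w - lr * (g_bar + noise)
```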
no code implementations • 20 Dec 2015 • Xi Wu, Matthew Fredrikson, Wentao Wu, Somesh Jha, Jeffrey F. Naughton
Perhaps more importantly, our theory reveals that the most basic mechanism in differential privacy, output perturbation, can be used to obtain a better tradeoff for all convex-Lipschitz-bounded learning tasks.
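A minimal sketch of output perturbation, assuming an L-Lipschitz loss with a lam-strongly-convex regularizer so that the L2 sensitivity of the minimizer is at most 2L/(n*lam), and using the standard Gaussian-mechanism calibration; `train_erm` is an assumed non-private training helper, not part of the paper.

```python
import numpy as np

def output_perturbation(train_erm, X, y, lam, lipschitz, epsilon, delta, rng=None):
    """Output perturbation: train non-privately, then add Gaussian noise calibrated to
    the L2 sensitivity of the minimizer (<= 2*L / (n*lam) under the stated assumptions)."""
    rng = rng or np.random.default_rng(0)
    w = train_erm(X, y, lam)                                  # non-private ERM solution
    sensitivity = 2.0 * lipschitz / (len(X) * lam)
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon   # Gaussian mechanism
    return w + rng.normal(0.0, sigma, size=w.shape)
```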
1 code implementation • 14 Nov 2015 • Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, Ananthram Swami
In this work, we introduce a defensive mechanism called defensive distillation to reduce the effectiveness of adversarial samples on DNNs.
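Defensive distillation trains a second network on the first network's temperature-softened class probabilities; below is a sketch of the soft-label objective. The original paper trains with cross-entropy against the soft labels, which has the same gradients as this KL form up to an additive constant; the temperature value is a placeholder.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=20.0):
    """Defensive-distillation-style objective: match the student's temperature-softened
    predictions to the teacher's temperature-softened probabilities (soft labels)."""
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    log_probs = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_probs, soft_targets, reduction="batchmean")
```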