Search Results for author: Xi Wu

Found 56 papers, 19 papers with code

Dcl-Net: Dual Contrastive Learning Network for Semi-Supervised Multi-Organ Segmentation

no code implementations 6 Mar 2024 Lu Wen, Zhenghao Feng, Yun Hou, Peng Wang, Xi Wu, Jiliu Zhou, Yan Wang

Semi-supervised learning is a sound approach to relieving the strict demand for abundant annotated datasets, especially for challenging multi-organ segmentation.

Contrastive Learning Organ Segmentation

Image2Points: A 3D Point-based Context Clusters GAN for High-Quality PET Image Reconstruction

1 code implementation 1 Feb 2024 Jiaqi Cui, Yan Wang, Lu Wen, Pinxian Zeng, Xi Wu, Jiliu Zhou, Dinggang Shen

To obtain high-quality Positron emission tomography (PET) images while minimizing radiation exposure, numerous methods have been proposed to reconstruct standard-dose PET (SPET) images from the corresponding low-dose PET (LPET) images.

Image Reconstruction

Masked Conditional Diffusion Model for Enhancing Deepfake Detection

no code implementations 1 Feb 2024 Tiewen Chen, Shanmin Yang, Shu Hu, Zhenghan Fang, Ying Fu, Xi Wu, Xin Wang

This paper presents a new insight into diffusion model-based data augmentation and proposes a Masked Conditional Diffusion Model (MCDM) for enhancing deepfake detection.

Data Augmentation DeepFake Detection +1

Uncertainty-Aware Explainable Recommendation with Large Language Models

no code implementations 31 Jan 2024 Yicui Peng, Hao Chen, ChingSheng Lin, Guo Huang, Jinrong Hu, Hui Guo, Bin Kong, Shu Hu, Xi Wu, Xin Wang

Providing explanations within the recommendation system would boost user satisfaction and foster trust, especially by elaborating on the reasons for selecting recommended items tailored to the user.

Explainable Recommendation Multi-Task Learning

Efficient Image Super-Resolution via Symmetric Visual Attention Network

no code implementations 17 Jan 2024 Chengxu Wu, Qinrui Fan, Shu Hu, Xi Wu, Xin Wang, Jing Hu

An important development direction for Single-Image Super-Resolution (SISR) algorithms is to improve their efficiency.

Image Super-Resolution

UMedNeRF: Uncertainty-aware Single View Volumetric Rendering for Medical Neural Radiance Fields

no code implementations 10 Nov 2023 Jing Hu, Qinrui Fan, Shu Hu, Siwei Lyu, Xi Wu, Xin Wang

In the field of clinical medicine, computed tomography (CT) is an effective medical imaging modality for the diagnosis of various pathologies.

Computed Tomography (CT)

Diffusion-based Radiotherapy Dose Prediction Guided by Inter-slice Aware Structure Encoding

no code implementations 6 Nov 2023 Zhenghao Feng, Lu Wen, Jianghong Xiao, Yuanyuan Xu, Xi Wu, Jiliu Zhou, Xingchen Peng, Yan Wang

In the forward process, DiffDose transforms dose distribution maps into pure Gaussian noise by gradually adding small amounts of noise, while a noise predictor is simultaneously trained to estimate the noise added at each timestep.
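The forward process described here is the standard DDPM formulation, which admits a closed-form jump to any timestep; a minimal numpy sketch (the linear beta schedule and array shapes are illustrative choices, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear beta schedule over T timesteps (a common DDPM choice; illustrative here).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)

def forward_diffuse(x0, t):
    """Closed-form forward process: q(x_t | x_0) = N(sqrt(a_bar_t) * x_0, (1 - a_bar_t) * I)."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps
    return xt, eps  # eps is the regression target for the noise predictor

x0 = rng.standard_normal((4, 64, 64))  # stand-in for a batch of dose distribution maps
xt, eps = forward_diffuse(x0, t=500)
```

Training the noise predictor then amounts to regressing `eps` from `xt` and `t`.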

Controlling Neural Style Transfer with Deep Reinforcement Learning

no code implementations 30 Sep 2023 Chengming Feng, Jing Hu, Xin Wang, Shu Hu, Bin Zhu, Xi Wu, Hongtu Zhu, Siwei Lyu

Controlling the degree of stylization in Neural Style Transfer (NST) is a little tricky, since it usually requires hand-engineering of hyper-parameters.

reinforcement-learning Reinforcement Learning (RL) +1

Weakly Supervised Semantic Segmentation by Knowledge Graph Inference

1 code implementation 25 Sep 2023 Jia Zhang, Bo Peng, Xi Wu

Extensive experimentation on both the multi-label classification and segmentation network stages underscores the effectiveness of the proposed graph reasoning approach for advancing WSSS.

Classification Multi-Label Classification +3

Image-to-Image Translation with Deep Reinforcement Learning

1 code implementation 24 Sep 2023 Xin Wang, Ziwei Luo, Jing Hu, Chengming Feng, Shu Hu, Bin Zhu, Xi Wu, Xin Li, Siwei Lyu

The key feature of the RL-I2IT framework is to decompose a monolithic learning process into small steps with a lightweight model that progressively transforms a source image into a target image.

Auxiliary Learning Decision Making +3

Rethinking Superpixel Segmentation from Biologically Inspired Mechanisms

no code implementations 23 Sep 2023 TingYu Zhao, Bo Peng, Yuan Sun, DaiPeng Yang, Zhenguang Zhang, Xi Wu

Recently, advancements in deep learning-based superpixel segmentation methods have brought about improvements in both the efficiency and the performance of segmentation.

Segmentation Superpixels

Edge-aware Hard Clustering Graph Pooling for Brain Imaging

1 code implementation 23 Aug 2023 Cheng Zhu, JiaYi Zhu, Xi Wu, Lijuan Zhang, Shuqi Yang, Ping Liang, Honghan Chen, Ying Tan

In this paper, we propose a novel edge-aware hard clustering graph pool (EHCPool), which is tailored to dominant edge features and redefines the clustering process.

Clustering Graph Clustering +1

TriDo-Former: A Triple-Domain Transformer for Direct PET Reconstruction from Low-Dose Sinograms

no code implementations 10 Aug 2023 Jiaqi Cui, Pinxian Zeng, Xinyi Zeng, Peng Wang, Xi Wu, Jiliu Zhou, Yan Wang, Dinggang Shen

Specifically, the TriDo-Former consists of two cascaded networks, i.e., a sinogram enhancement transformer (SE-Former) for denoising the input LPET sinograms and a spatial-spectral reconstruction transformer (SSR-Former) for reconstructing SPET images from the denoised sinograms.

Denoising Image Reconstruction +1

DiffDP: Radiotherapy Dose Prediction via a Diffusion Model

no code implementations 19 Jul 2023 Zhenghao Feng, Lu Wen, Peng Wang, Binyu Yan, Xi Wu, Jiliu Zhou, Yan Wang

To alleviate this limitation, we innovatively introduce a diffusion-based dose prediction (DiffDP) model for predicting the radiotherapy dose distribution of cancer patients.

Anatomy

Dimension Independent Mixup for Hard Negative Sample in Collaborative Filtering

1 code implementation 28 Jun 2023 Xi Wu, Liangwei Yang, Jibing Gong, Chao Zhou, Tianyu Lin, Xiaolong Liu, Philip S. Yu

To address this limitation, we propose Dimension Independent Mixup for Hard Negative Sampling (DINS), which is the first Area-wise sampling method for training CF-based models.
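The snippet does not spell out how DINS mixes dimensions; one plausible sketch, suggested only by the title, interpolates the positive item embedding into sampled negatives with an independent coefficient per dimension (the function name, the 0–0.5 mixing range, and all shapes are assumptions, not the authors' method):

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup_hard_negatives(pos_emb, neg_embs, max_lam=0.5):
    """Synthesize harder negatives by mixing the positive item embedding into
    sampled negatives, with an independent coefficient per dimension (a guess
    at what "dimension independent" means; not the exact DINS algorithm)."""
    lam = rng.uniform(0.0, max_lam, size=pos_emb.shape[0])  # per-dimension weights
    return lam * pos_emb + (1.0 - lam) * neg_embs           # broadcasts over candidates

pos = rng.standard_normal(64)         # positive item embedding
negs = rng.standard_normal((10, 64))  # 10 sampled negative embeddings
hard_negs = mixup_hard_negatives(pos, negs)
```

The synthesized negatives sit between the sampled negatives and the positive, which is what makes them "hard" for a dot-product scorer.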

Collaborative Filtering

Two Heads are Better than One: Towards Better Adversarial Robustness by Combining Transduction and Rejection

no code implementations 27 May 2023 Nils Palumbo, Yang Guo, Xi Wu, Jiefeng Chen, Yingyu Liang, Somesh Jha

Nevertheless, under recent strong adversarial attacks (GMSA, which has been shown to be much more effective than AutoAttack against transduction), Goldwasser et al.'s work was shown to have low performance in a practical deep-learning setting.

Adversarial Robustness

Stratified Adversarial Robustness with Rejection

1 code implementation 2 May 2023 Jiefeng Chen, Jayaram Raghuram, Jihye Choi, Xi Wu, Yingyu Liang, Somesh Jha

We theoretically analyze the stratified rejection setting and propose a novel defense method -- Adversarial Training with Consistent Prediction-based Rejection (CPR) -- for building a robust selective classifier.

Adversarial Robustness Robust classification

Harnessing the Power of Text-image Contrastive Models for Automatic Detection of Online Misinformation

no code implementations 19 Apr 2023 Hao Chen, Peng Zheng, Xin Wang, Shu Hu, Bin Zhu, Jinrong Hu, Xi Wu, Siwei Lyu

With the growing usage of social media websites in recent decades, news articles spread online rapidly, resulting in an unprecedented scale of potentially fraudulent information.

Contrastive Learning Misinformation +1

The Trade-off between Universality and Label Efficiency of Representations from Contrastive Learning

1 code implementation 28 Feb 2023 Zhenmei Shi, Jiefeng Chen, Kunyang Li, Jayaram Raghuram, Xi Wu, Yingyu Liang, Somesh Jha

Pre-training representations (a.k.a. foundation models) has recently become a prevalent learning paradigm, where one first pre-trains a representation using large-scale unlabeled data, and then learns simple predictors on top of the representation using small labeled data from the downstream tasks.

Contrastive Learning

Attacking Important Pixels for Anchor-free Detectors

no code implementations 26 Jan 2023 Yunxu Xie, Shu Hu, Xin Wang, Quanyu Liao, Bin Zhu, Xi Wu, Siwei Lyu

Existing adversarial attacks on object detection focus on attacking anchor-based detectors, which may not work well for anchor-free detectors.

Adversarial Attack object-detection +2

Stochastic Actor-Executor-Critic for Image-to-Image Translation

1 code implementation 14 Dec 2021 Ziwei Luo, Jing Hu, Xin Wang, Siwei Lyu, Bin Kong, Youbing Yin, Qi Song, Xi Wu

Training a model-free deep reinforcement learning model to solve image-to-image translation is difficult since it involves high-dimensional continuous state and action spaces.

Continuous Control Image-to-Image Translation +3

Revisiting Adversarial Robustness of Classifiers With a Reject Option

no code implementations AAAI Workshop AdvML 2022 Jiefeng Chen, Jayaram Raghuram, Jihye Choi, Xi Wu, Yingyu Liang, Somesh Jha

Motivated by this metric, we propose novel loss functions and a robust training method -- stratified adversarial training with rejection (SATR) -- for a classifier with a reject option, where the goal is to accept and correctly classify small input perturbations, while allowing the rejection of larger input perturbations that cannot be correctly classified.
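A classifier with a reject option can be sketched generically as thresholding on softmax confidence; this illustrates only the selective-classification setting, not SATR's stratified loss (the threshold value and logits are illustrative):

```python
import numpy as np

def predict_with_rejection(logits, threshold=0.7):
    """Selective classifier: return the argmax class when softmax confidence
    clears the threshold, otherwise -1 (reject)."""
    z = logits - logits.max(axis=1, keepdims=True)  # stable softmax
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    conf = probs.max(axis=1)
    return np.where(conf >= threshold, probs.argmax(axis=1), -1)

logits = np.array([[4.0, 0.0, 0.0],    # confident -> class 0
                   [0.4, 0.5, 0.45]])  # ambiguous -> rejected
preds = predict_with_rejection(logits)  # array([ 0, -1])
```

Robust training methods like SATR then shape the model so that large perturbations land in the rejected (low-confidence) region.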

Adversarial Robustness Image Classification

Towards Evaluating the Robustness of Neural Networks Learned by Transduction

1 code implementation ICLR 2022 Jiefeng Chen, Xi Wu, Yang Guo, Yingyu Liang, Somesh Jha

There has been emerging interest in using transductive learning for adversarial robustness (Goldwasser et al., NeurIPS 2020; Wu et al., ICML 2020; Wang et al., ArXiv 2021).

Adversarial Robustness Bilevel Optimization +1

Detecting Errors and Estimating Accuracy on Unlabeled Data with Self-training Ensembles

1 code implementation NeurIPS 2021 Jiefeng Chen, Frederick Liu, Besim Avci, Xi Wu, Yingyu Liang, Somesh Jha

This observation leads to two challenging tasks: (1) unsupervised accuracy estimation, which aims to estimate the accuracy of a pre-trained classifier on a set of unlabeled test inputs; (2) error detection, which aims to identify mis-classified test inputs.

Towards Adversarial Robustness via Transductive Learning

no code implementations 15 Jun 2021 Jiefeng Chen, Yang Guo, Xi Wu, Tianqi Li, Qicheng Lao, Yingyu Liang, Somesh Jha

Compared to traditional "test-time" defenses, these defense mechanisms "dynamically retrain" the model based on test time input via transductive learning; and theoretically, attacking these defenses boils down to bilevel optimization, which seems to raise the difficulty for adaptive attacks.

Adversarial Robustness Bilevel Optimization +1

Transferable Adversarial Examples for Anchor Free Object Detection

no code implementations 3 Jun 2021 Quanyu Liao, Xin Wang, Bin Kong, Siwei Lyu, Bin Zhu, Youbing Yin, Qi Song, Xi Wu

Deep neural networks have been demonstrated to be vulnerable to adversarial attacks: subtle perturbations can completely change the prediction results.

Adversarial Attack Object +2

Imperceptible Adversarial Examples for Fake Image Detection

no code implementations 3 Jun 2021 Quanyu Liao, Yuezun Li, Xin Wang, Bin Kong, Bin Zhu, Siwei Lyu, Youbing Yin, Qi Song, Xi Wu

Fooling people with highly realistic fake images generated with Deepfake or GANs causes great disturbance to our society.

Face Swapping Fake Image Detection

Deep Learning based Multi-modal Computing with Feature Disentanglement for MRI Image Synthesis

no code implementations 6 May 2021 Yuchen Fei, Bo Zhan, Mei Hong, Xi Wu, Jiliu Zhou, Yan Wang

To take full advantage of the complementary information provided by different modalities, multi-modal MRI sequences are utilized as input.

Disentanglement Image Generation

Racetrack microresonator based electro-optic phase shifters on a 3C-silicon-carbide-on-insulator platform

no code implementations 11 Feb 2021 Tianren Fan, Xi Wu, Sai R. M. Vangapandu, Amir H. Hosseinnia, Ali A. Eftekhar, Ali Adibi

We report the first demonstration of integrated electro-optic (EO) phase shifters based on racetrack microresonators on a 3C-silicon-carbide-on-insulator (SiCOI) platform working at near-infrared (NIR) wavelengths.

Optics Applied Physics

Multilayer Haldane model

no code implementations 4 Jan 2021 Xi Wu, C. X. Zhang, M. A. Zubkov

We propose the model of layered materials, in which each layer is described by the conventional Haldane model, while the inter-layer hopping parameter corresponds to the ABC stacking.

Mesoscale and Nanoscale Physics

Test-Time Adaptation and Adversarial Robustness

no code implementations 1 Jan 2021 Xi Wu, Yang Guo, Tianqi Li, Jiefeng Chen, Qicheng Lao, Yingyu Liang, Somesh Jha

On the positive side, we show that, if one is allowed to access the training data, then Domain Adversarial Neural Networks (${\sf DANN}$), an algorithm designed for unsupervised domain adaptation, can provide nontrivial robustness in the test-time maximin threat model against strong transfer attacks and adaptive fixed point attacks.

Adversarial Robustness Test-time Adaptation +1

ASMFS: Adaptive-Similarity-based Multi-modality Feature Selection for Classification of Alzheimer's Disease

no code implementations 16 Oct 2020 Yuang Shi, Chen Zu, Mei Hong, Luping Zhou, Lei Wang, Xi Wu, Jiliu Zhou, Daoqiang Zhang, Yan Wang

With the increasing amounts of high-dimensional heterogeneous data to be processed, multi-modality feature selection has become an important research direction in medical image analysis.

feature selection General Classification

Informative Outlier Matters: Robustifying Out-of-distribution Detection Using Outlier Mining

no code implementations 28 Sep 2020 Jiefeng Chen, Yixuan Li, Xi Wu, Yingyu Liang, Somesh Jha

We show that, by mining informative auxiliary OOD data, one can significantly improve OOD detection performance, and somewhat surprisingly, generalize to unseen adversarial attacks.

Out-of-Distribution Detection Out of Distribution (OOD) Detection

ATOM: Robustifying Out-of-distribution Detection Using Outlier Mining

1 code implementation 26 Jun 2020 Jiefeng Chen, Yixuan Li, Xi Wu, Yingyu Liang, Somesh Jha

We show that, by mining informative auxiliary OOD data, one can significantly improve OOD detection performance, and somewhat surprisingly, generalize to unseen adversarial attacks.

Out-of-Distribution Detection Out of Distribution (OOD) Detection

Representation Bayesian Risk Decompositions and Multi-Source Domain Adaptation

no code implementations 22 Apr 2020 Xi Wu, Yang Guo, Jiefeng Chen, Yingyu Liang, Somesh Jha, Prasad Chalasani

Recent studies provide hints and failure examples for domain invariant representation learning, a common approach for this problem, but the explanations provided are somewhat different and do not provide a unified picture.

Domain Adaptation Representation Learning

Robust Out-of-distribution Detection for Neural Networks

1 code implementation AAAI Workshop AdvML 2022 Jiefeng Chen, Yixuan Li, Xi Wu, Yingyu Liang, Somesh Jha

Formally, we extensively study the problem of Robust Out-of-Distribution Detection on common OOD detection approaches, and show that state-of-the-art OOD detectors can be easily fooled by adding small perturbations to the in-distribution and OOD inputs.

Out-of-Distribution Detection Out of Distribution (OOD) Detection

Category-wise Attack: Transferable Adversarial Examples for Anchor Free Object Detection

no code implementations 10 Feb 2020 Quanyu Liao, Xin Wang, Bin Kong, Siwei Lyu, Youbing Yin, Qi Song, Xi Wu

Deep neural networks have been demonstrated to be vulnerable to adversarial attacks: subtle perturbations can completely change the classification results.

Object object-detection +1

Discovery of Bias and Strategic Behavior in Crowdsourced Performance Assessment

no code implementations 5 Aug 2019 Yifei Huang, Matt Shum, Xi Wu, Jason Zezhong Xiao

With the industry trend of shifting from a traditional hierarchical approach to a flatter management structure, crowdsourced performance assessment has gained mainstream popularity.

Fairness Management

Rearchitecting Classification Frameworks For Increased Robustness

no code implementations 26 May 2019 Varun Chandrasekaran, Brian Tang, Nicolas Papernot, Kassem Fawaz, Somesh Jha, Xi Wu

How can one design a classification paradigm that leverages these invariances to improve the robustness-accuracy trade-off?

Autonomous Driving Classification +2

Robust Attribution Regularization

1 code implementation NeurIPS 2019 Jiefeng Chen, Xi Wu, Vaibhav Rastogi, Yingyu Liang, Somesh Jha

An emerging problem in trustworthy machine learning is to train models that produce robust interpretations for their predictions.

Concise Explanations of Neural Networks using Adversarial Training

1 code implementation ICML 2020 Prasad Chalasani, Jiefeng Chen, Amrita Roy Chowdhury, Somesh Jha, Xi Wu

Our first contribution is a theoretical exploration of how these two properties (when using attributions based on Integrated Gradients, or IG) are related to adversarial training, for a class of 1-layer networks (which includes logistic regression models for binary and multi-class classification); for these networks we show that (a) adversarial training using an $\ell_\infty$-bounded adversary produces models with sparse attribution vectors, and (b) natural model-training while encouraging stable explanations (via an extra term in the loss function), is equivalent to adversarial training.
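Integrated Gradients, the attribution method analyzed here, is straightforward to approximate with a midpoint Riemann sum; a toy sketch on a 1-layer sigmoid model of the kind the paper studies (the weight vector `w` and inputs are illustrative):

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=100):
    """Midpoint Riemann-sum approximation of Integrated Gradients:
    IG_i = (x_i - x'_i) * integral_0^1 dF(x' + a*(x - x'))/dx_i da."""
    total = np.zeros_like(x)
    for k in range(steps):
        a = (k + 0.5) / steps
        total += grad_fn(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

# Toy 1-layer model F(x) = sigmoid(w . x), the class of networks analyzed above.
w = np.array([1.0, -2.0, 0.0])  # illustrative weights; note the exact zero
def grad_fn(x):
    s = 1.0 / (1.0 + np.exp(-(w @ x)))
    return s * (1.0 - s) * w

x = np.array([0.5, 0.5, 0.5])
ig = integrated_gradients(grad_fn, x, baseline=np.zeros(3))
# Completeness: ig.sum() ~= F(x) - F(baseline); ig[2] is exactly 0.
```

Sparse attribution vectors, in this setting, mean most entries of `ig` are (near) zero, as the zero-weight coordinate is here.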

Multi-class Classification

Towards Understanding Limitations of Pixel Discretization Against Adversarial Attacks

1 code implementation 20 May 2018 Jiefeng Chen, Xi Wu, Vaibhav Rastogi, Yingyu Liang, Somesh Jha

We analyze our results in a theoretical framework and offer strong evidence that pixel discretization is unlikely to work on all but the simplest of the datasets.

The Manifold Assumption and Defenses Against Adversarial Perturbations

no code implementations ICLR 2018 Xi Wu, Uyeong Jang, Lingjiao Chen, Somesh Jha

Interestingly, we find that a recent objective by Madry et al. encourages training a model that satisfies well our formal version of the goodness property, but has a weak control of points that are wrong but with low confidence.

Reinforcing Adversarial Robustness using Model Confidence Induced by Adversarial Training

no code implementations ICML 2018 Xi Wu, Uyeong Jang, Jiefeng Chen, Lingjiao Chen, Somesh Jha

In this paper we study leveraging confidence information induced by adversarial training to reinforce adversarial robustness of a given adversarially trained model.

Adversarial Robustness

Tuple-oriented Compression for Large-scale Mini-batch Stochastic Gradient Descent

no code implementations 22 Feb 2017 Fengan Li, Lingjiao Chen, Yijing Zeng, Arun Kumar, Jeffrey F. Naughton, Jignesh M. Patel, Xi Wu

We fill this crucial research gap by proposing a new lossless compression scheme we call tuple-oriented compression (TOC) that is inspired by an unlikely source, the string/text compression scheme Lempel-Ziv-Welch, but tailored to MGD in a way that preserves tuple boundaries within mini-batches.
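The Lempel-Ziv-Welch scheme that inspired TOC builds a dictionary of previously seen strings and emits integer codes; a compact sketch of plain LZW (TOC's tuple-boundary tailoring for mini-batches is not shown):

```python
def lzw_compress(data: bytes) -> list:
    """Classic Lempel-Ziv-Welch: grow a dictionary of previously seen byte
    strings and emit integer codes for the longest known match."""
    table = {bytes([i]): i for i in range(256)}  # seed with all single bytes
    w, out = b"", []
    for byte in data:
        wb = w + bytes([byte])
        if wb in table:
            w = wb                   # keep extending the current match
        else:
            out.append(table[w])     # emit code for the longest match
            table[wb] = len(table)   # register the new string
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

codes = lzw_compress(b"ABABABA")  # [65, 66, 256, 258]
```

Repeated patterns collapse to single codes (`256` = "AB", `258` = "ABA"), which is the redundancy TOC exploits within mini-batches.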

Data Compression Open-Ended Question Answering +1

Bolt-on Differential Privacy for Scalable Stochastic Gradient Descent-based Analytics

1 code implementation 15 Jun 2016 Xi Wu, Fengan Li, Arun Kumar, Kamalika Chaudhuri, Somesh Jha, Jeffrey F. Naughton

This paper takes a first step to remedy this disconnect and proposes a private SGD algorithm to address \emph{both} issues in an integrated manner.
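One standard way to privatize a trained model is output perturbation with the Gaussian mechanism: train non-privately, then add calibrated noise before releasing the weights. A sketch under stated assumptions (the sensitivity value and weights are illustrative; the paper's own calibration for SGD rests on its specific sensitivity analysis):

```python
import numpy as np

rng = np.random.default_rng(0)

def release_private_weights(w, l2_sensitivity, eps, delta):
    """Output perturbation via the Gaussian mechanism: add noise with scale
    sigma = S * sqrt(2 ln(1.25/delta)) / eps, where S is the L2 sensitivity
    of the training procedure."""
    sigma = l2_sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return w + rng.normal(0.0, sigma, size=w.shape)

w = np.array([0.8, -1.2, 0.3])  # weights from an ordinary (non-private) SGD run
w_priv = release_private_weights(w, l2_sensitivity=0.01, eps=1.0, delta=1e-5)
```

The "bolt-on" appeal is that the SGD loop itself is untouched; only the released output is noised.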

Revisiting Differentially Private Regression: Lessons From Learning Theory and their Consequences

no code implementations 20 Dec 2015 Xi Wu, Matthew Fredrikson, Wentao Wu, Somesh Jha, Jeffrey F. Naughton

Perhaps more importantly, our theory reveals that the most basic mechanism in differential privacy, output perturbation, can be used to obtain a better tradeoff for all convex-Lipschitz-bounded learning tasks.

Learning Theory regression

Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks

2 code implementations 14 Nov 2015 Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, Ananthram Swami

In this work, we introduce a defensive mechanism called defensive distillation to reduce the effectiveness of adversarial samples on DNNs.
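Defensive distillation's key ingredient is the temperature softmax: the teacher's high-temperature outputs serve as soft labels for retraining the same architecture. A minimal sketch of the temperature effect (the logit values are illustrative):

```python
import numpy as np

def softmax_T(logits, T):
    """Softmax at temperature T. Defensive distillation takes the teacher's
    outputs at high T as soft labels and retrains on them."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

logits = np.array([8.0, 2.0, 1.0])
hard = softmax_T(logits, T=1)   # near one-hot: gradients carry little signal
soft = softmax_T(logits, T=20)  # smoothed soft labels used during distillation
```

Raising T flattens the distribution, which is what transfers the teacher's dark knowledge and (the paper argues) dampens the gradients adversarial samples exploit.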

Autonomous Vehicles BIG-bench Machine Learning
