Search Results for author: Wenyu Jiang

Found 10 papers, 3 papers with code

Efficient Membership Inference Attacks by Bayesian Neural Network

no code implementations • 10 Mar 2025 • Zhenlong Liu, Wenyu Jiang, Feng Zhou, Hongxin Wei

Membership Inference Attacks (MIAs) aim to estimate whether a specific data point was used in the training of a given model.
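
For orientation, the classic loss-thresholding MIA baseline (not the paper's Bayesian method, which is not detailed in this snippet) can be sketched as:

```python
import numpy as np

def loss_threshold_mia(losses, threshold):
    """Loss-thresholding membership inference baseline: points whose
    loss under the target model falls below a chosen threshold are
    predicted to have been part of the training set."""
    return np.asarray(losses) < threshold

# Toy example: training members typically incur lower loss than non-members.
flags = loss_threshold_mia([0.05, 2.3, 0.12, 1.8], threshold=0.5)
```

The threshold is usually calibrated on shadow models or held-out data; the value here is illustrative only.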

Bayesian Inference • Inference Attack • +2

On the Noise Robustness of In-Context Learning for Text Generation

1 code implementation • 27 May 2024 • Hongfu Gao, Feipeng Zhang, Wenyu Jiang, Jun Shu, Feng Zheng, Hongxin Wei

In this work, we show that, on text generation tasks, noisy annotations significantly hurt the performance of in-context learning.

In-Context Learning • text-classification • +2

Similarity-Navigated Conformal Prediction for Graph Neural Networks

1 code implementation • 23 May 2024 • Jianqing Song, Jianguo Huang, Wenyu Jiang, Baoming Zhang, Shuangjie Li, Chongjun Wang

In this paper, we empirically show that for each node, aggregating the non-conformity scores of nodes with the same label can improve the efficiency of conformal prediction sets while maintaining valid marginal coverage.
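
The split-conformal machinery the abstract builds on can be sketched as follows; the calibration scores and test probabilities are made up, and the label-wise aggregation step of the paper itself is omitted:

```python
import numpy as np

def conformal_sets(cal_scores, test_probs, alpha=0.1):
    """Split conformal prediction: calibrate a quantile of
    non-conformity scores (here 1 - p(true label)) on held-out data,
    then include every label whose score falls below that quantile.
    This yields sets with valid marginal coverage of 1 - alpha."""
    n = len(cal_scores)
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    qhat = np.quantile(cal_scores, q_level, method="higher")
    return [np.where(1 - p <= qhat)[0] for p in test_probs]

# Hypothetical calibration scores and softmax outputs for two test nodes.
cal = np.array([0.3, 0.5, 0.45, 0.6, 0.2, 0.55, 0.4, 0.35, 0.5, 0.25])
probs = np.array([[0.7, 0.2, 0.1], [0.5, 0.3, 0.2]])
sets = conformal_sets(cal, probs)
```

"Efficiency" in the abstract refers to how small these prediction sets are while coverage is maintained.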

Conformal Prediction • Node Classification • +2

Exploring Learning Complexity for Efficient Downstream Dataset Pruning

no code implementations • 8 Feb 2024 • Wenyu Jiang, Zhenlong Liu, Zejian Xie, Songxin Zhang, BingYi Jing, Hongxin Wei

In this paper, we propose a straightforward, novel, and training-free hardness score named Distorting-based Learning Complexity (DLC), to identify informative images and instructions from the downstream dataset efficiently.

Informativeness

DOS: Diverse Outlier Sampling for Out-of-Distribution Detection

2 code implementations • 3 Jun 2023 • Wenyu Jiang, Hao Cheng, Mingcai Chen, Chongjun Wang, Hongxin Wei

Modern neural networks are known to give overconfident predictions for out-of-distribution inputs when deployed in the open world.
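
The overconfidence problem is commonly quantified with the maximum softmax probability (MSP) baseline (this is standard background, not the DOS method itself):

```python
import numpy as np

def msp_score(logits):
    """Maximum softmax probability (MSP) OOD baseline: a low maximum
    class probability suggests the input is out-of-distribution."""
    z = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    p = np.exp(z)
    p /= p.sum(axis=-1, keepdims=True)
    return p.max(axis=-1)

in_dist = msp_score(np.array([[6.0, 0.0, 0.0]]))  # confidently peaked
ood = msp_score(np.array([[1.0, 0.9, 1.1]]))      # near-uniform
```

The failure mode motivating the paper is that real OOD inputs often still receive high MSP scores.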

Diversity • Out-of-Distribution Detection

MixBoost: Improving the Robustness of Deep Neural Networks by Boosting Data Augmentation

no code implementations • 8 Dec 2022 • Zhendong Liu, Wenyu Jiang, Min Guo, Chongjun Wang

Based on the analysis of the internal mechanisms, we develop a mask-based boosting method for data augmentation that comprehensively improves several robustness measures of AI models and beats state-of-the-art data augmentation approaches.
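
A minimal example of mask-based augmentation in this general family is Cutout-style patch masking; this is an illustrative sketch, not the MixBoost method:

```python
import numpy as np

def random_mask(image, mask_size, rng=None):
    """Cutout-style masking (illustrative only): zero out a random
    square patch of the image to encourage robustness to occlusion."""
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = image.shape[:2]
    y = rng.integers(0, h - mask_size + 1)
    x = rng.integers(0, w - mask_size + 1)
    out = image.copy()
    out[y:y + mask_size, x:x + mask_size] = 0
    return out

img = np.ones((8, 8))
masked = random_mask(img, mask_size=3)
```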

Data Augmentation • Explainable artificial intelligence • +1

Explanation-based Counterfactual Retraining (XCR): A Calibration Method for Black-box Models

no code implementations • 22 Jun 2022 • Liu Zhendong, Wenyu Jiang, Yi Zhang, Chongjun Wang

With the rapid development of eXplainable Artificial Intelligence (XAI), a long line of past work has raised concerns that perturbation-based post-hoc XAI models suffer from the Out-of-Distribution (OOD) problem and that their explanations are socially misaligned.

counterfactual • Explainable artificial intelligence • +2

READ: Aggregating Reconstruction Error into Out-of-distribution Detection

no code implementations • 15 Jun 2022 • Wenyu Jiang, Yuxin Ge, Hao Cheng, Mingcai Chen, Shuai Feng, Chongjun Wang

We propose a novel method, READ (Reconstruction Error Aggregated Detector), to unify inconsistencies from the classifier and the autoencoder.
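
An aggregation in the spirit of READ can be sketched as below; the weighting and min-max normalization are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def aggregated_ood_score(msp, recon_error, w=0.5):
    """Combine a classifier-based score (1 - max softmax probability)
    with a normalized autoencoder reconstruction error.
    Higher values indicate the input is more likely OOD.
    The weight w and the normalization are illustrative assumptions."""
    msp = np.asarray(msp, dtype=float)
    err = np.asarray(recon_error, dtype=float)
    err = (err - err.min()) / (err.max() - err.min() + 1e-12)
    return w * (1 - msp) + (1 - w) * err

# Hypothetical inputs: the second sample is less confidently classified
# and reconstructs poorly, so it should score higher (more OOD-like).
scores = aggregated_ood_score(msp=[0.9, 0.4], recon_error=[0.1, 2.0])
```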

Out-of-Distribution Detection • Out of Distribution (OOD) Detection
