Search Results for author: Yuxuan Zhu

Found 5 papers, 0 papers with code

FedTrans: Efficient Federated Learning Over Heterogeneous Clients via Model Transformation

no code implementations • 21 Apr 2024 • Yuxuan Zhu, Jiachen Liu, Mosharaf Chowdhury, Fan Lai

Federated learning (FL) aims to train machine learning (ML) models across potentially millions of edge client devices.

Federated Learning

Feature Attribution with Necessity and Sufficiency via Dual-stage Perturbation Test for Causal Explanation

no code implementations • 13 Feb 2024 • Xuexin Chen, Ruichu Cai, Zhengting Huang, Yuxuan Zhu, Julien Horwood, Zhifeng Hao, Zijian Li, José Miguel Hernández-Lobato

We investigate the problem of explainability in machine learning. To address this problem, Feature Attribution Methods (FAMs) measure the contribution of each feature through a perturbation test, where the difference in prediction is compared under different perturbations. However, such perturbation tests may not accurately distinguish the contributions of different features, when their change in prediction is the same after perturbation. In order to enhance the ability of FAMs to distinguish different features' contributions in this challenging setting, we propose to utilize the probability (PNS) that perturbing a feature is a necessary and sufficient cause for the prediction to change as a measure of feature importance. Our approach, Feature Attribution with Necessity and Sufficiency (FANS), computes the PNS via a perturbation test involving two stages (factual and interventional). In practice, to generate counterfactual samples, we use a resampling-based approach on the observed samples to approximate the required conditional distribution. Finally, we combine FANS and gradient-based optimization to extract the subset with the largest PNS. We demonstrate that FANS outperforms existing feature attribution methods on six benchmarks.
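The PNS idea above can be illustrated with a toy Monte Carlo sketch: estimate how often perturbing a feature both flips the prediction (sufficiency) and, once restored, recovers the original prediction (necessity). The model, feature indices, and perturbation distribution below are all hypothetical; the actual FANS method uses a dual-stage (factual and interventional) test with resampling over observed samples and a gradient-based search for the highest-PNS subset.

```python
import numpy as np

def model(x):
    # Hypothetical binary classifier: predicts 1 when the sum of the
    # first two features exceeds a threshold.
    return int(x[0] + x[1] > 1.0)

def estimate_pns(x, feature, perturb, n_samples=1000, rng=None):
    """Monte Carlo estimate of the probability that perturbing `feature`
    is a necessary and sufficient cause of a prediction change."""
    rng = np.random.default_rng(rng)
    base = model(x)
    hits = 0
    for _ in range(n_samples):
        z = x.copy()
        z[feature] = perturb(rng)   # intervene on the feature
        sufficient = model(z) != base   # perturbation flips the prediction
        z[feature] = x[feature]         # restore the factual value
        necessary = model(z) == base    # restoring flips it back
        hits += sufficient and necessary
    return hits / n_samples

x = np.array([0.9, 0.9, 0.1])
# Feature 0 drives the decision here, so its PNS estimate is high.
score = estimate_pns(x, feature=0, perturb=lambda rng: rng.uniform(-1, 0))
```

In this toy setup, a feature whose perturbation never changes the prediction (e.g. feature 2 above) scores 0, which is exactly the disambiguation the abstract describes: two features with identical prediction differences under one perturbation can still differ in PNS.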

Counterfactual

Where and How to Attack? A Causality-Inspired Recipe for Generating Counterfactual Adversarial Examples

no code implementations • 21 Dec 2023 • Ruichu Cai, Yuxuan Zhu, Jie Qiao, Zefeng Liang, Furui Liu, Zhifeng Hao

By considering the underappreciated causal generating process, we first pinpoint the source of the vulnerability of DNNs through the lens of causality, then give theoretical results to answer "where to attack".

Counterfactual

On the Probability of Necessity and Sufficiency of Explaining Graph Neural Networks: A Lower Bound Optimization Approach

no code implementations • 14 Dec 2022 • Ruichu Cai, Yuxuan Zhu, Xuexin Chen, Yuan Fang, Min Wu, Jie Qiao, Zhifeng Hao

To address the non-identifiability of PNS, we resort to a lower bound of PNS that can be optimized via counterfactual estimation, and propose a framework of Necessary and Sufficient Explanation for GNN (NSEG) via optimizing that lower bound.

Counterfactual

A Survey on Explainable Anomaly Detection

no code implementations • 13 Oct 2022 • Zhong Li, Yuxuan Zhu, Matthijs van Leeuwen

In the past two decades, most research on anomaly detection has focused on improving detection accuracy while largely ignoring the explainability of the corresponding methods, thus leaving the explanation of outcomes to practitioners.

Anomaly Detection
