Search Results for author: Raghu Mudumbai

Found 4 papers, 0 papers with code

Optimal Pooling Matrix Design for Group Testing with Dilution (Row Degree) Constraints

no code implementations • 5 Aug 2020 • Jirong Yi, Myung Cho, Xiaodong Wu, Raghu Mudumbai, Weiyu Xu

In this paper, we consider the problem of designing an optimal pooling matrix for group testing (for example, for COVID-19 virus testing) under the constraint that no more than $r>0$ samples can be pooled together, which we call the "dilution constraint".
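To make the dilution constraint concrete, here is a minimal sketch of a random pooling matrix whose every row (pool) mixes at most $r$ samples. This is a toy random construction for illustration only, not the optimal design derived in the paper; the function name and parameters are hypothetical.

```python
import random

def random_pooling_matrix(m, n, r, seed=0):
    """Build a toy m x n binary pooling matrix (m pools, n samples)
    in which each pool contains exactly r samples, so every row
    degree satisfies the dilution constraint (<= r)."""
    rng = random.Random(seed)
    rows = []
    for _ in range(m):
        chosen = set(rng.sample(range(n), r))  # r distinct samples for this pool
        rows.append([1 if j in chosen else 0 for j in range(n)])
    return rows

A = random_pooling_matrix(m=8, n=20, r=5)
# every row degree respects the dilution constraint
assert all(sum(row) <= 5 for row in A)
```

The optimal design in the paper chooses which samples go into which pools far more carefully than this uniform random sketch; the sketch only demonstrates the row-degree constraint itself.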

Derivation of Information-Theoretically Optimal Adversarial Attacks with Applications to Robust Machine Learning

no code implementations • 28 Jul 2020 • Jirong Yi, Raghu Mudumbai, Weiyu Xu

We consider the theoretical problem of designing an optimal adversarial attack on a decision system that maximally degrades the achievable performance of the system as measured by the mutual information between the degraded signal and the label of interest.

Adversarial Attack
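As a toy illustration of an attack degrading mutual information (my own simplified example, not the paper's derivation): with equiprobable binary labels, class means at distance $d$, Gaussian noise $\sigma$, and a maximum-likelihood detector, the label-to-decision channel is binary symmetric, so $I(Y;\hat{Y}) = 1 - H_2(p_e)$ with $p_e = Q(d/2\sigma)$. An adversary with budget $\epsilon$ that pushes each sample toward the decision boundary shrinks the effective distance to $d - 2\epsilon$, lowering the mutual information.

```python
import math

def h2(p):
    # Binary entropy in bits
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def q_func(x):
    # Gaussian tail probability Q(x)
    return 0.5 * math.erfc(x / math.sqrt(2))

def mi_hard_decision(d, sigma):
    """I(Y; Yhat) in bits for equiprobable binary classes whose means
    are distance d apart under Gaussian noise sigma, with an ML
    detector -- i.e. 1 - H2(pe) for the induced binary symmetric channel."""
    pe = q_func(d / (2 * sigma))
    return 1.0 - h2(pe)

d, sigma, eps = 4.0, 1.0, 1.0
clean = mi_hard_decision(d, sigma)            # unperturbed signal
attacked = mi_hard_decision(d - 2 * eps, sigma)  # means pushed eps closer each
assert attacked < clean
```

This scalar example only shows the mechanism (perturbation shrinks separation, which shrinks mutual information); the paper treats the general optimal-attack problem, not this special case.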

Do Deep Minds Think Alike? Selective Adversarial Attacks for Fine-Grained Manipulation of Multiple Deep Neural Networks

no code implementations • 26 Mar 2020 • Zain Khan, Jirong Yi, Raghu Mudumbai, Xiaodong Wu, Weiyu Xu

Recent works have demonstrated the existence of {\it adversarial examples} targeting a single machine learning system.

An Information-Theoretic Explanation for the Adversarial Fragility of AI Classifiers

no code implementations • 27 Jan 2019 • Hui Xie, Jirong Yi, Weiyu Xu, Raghu Mudumbai

We present a simple hypothesis about a compression property of artificial intelligence (AI) classifiers and give theoretical arguments showing that this hypothesis successfully accounts for the observed fragility of AI classifiers to small adversarial perturbations.
