Search Results for author: Jirong Yi

Found 9 papers, 0 papers with code

Solving Large Scale Quadratic Constrained Basis Pursuit

no code implementations · 2 Apr 2021 · Jirong Yi

Inspired by the alternating direction method of multipliers (ADMM) and the idea of operator splitting, we propose an efficient algorithm for solving large-scale quadratically constrained basis pursuit.
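
The splitting idea can be sketched generically. The code below solves min ‖x‖₁ subject to ‖Ax − b‖₂ ≤ ε with a linearized-ADMM scheme on the splitting Ax = y; the splitting choice, step-size rule, and all parameter values are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def qcbp_linearized_admm(A, b, eps, rho=1.0, n_iter=5000):
    """Sketch: min ||x||_1  s.t.  ||Ax - b||_2 <= eps, via linearized ADMM
    on the splitting Ax = y with y confined to the ball B(b, eps).
    All parameter choices here are illustrative, not the paper's."""
    m, n = A.shape
    t = 1.0 / (rho * np.linalg.norm(A, 2) ** 2)  # step for linearized x-update
    x, y, u = np.zeros(n), b.copy(), np.zeros(m)
    for _ in range(n_iter):
        # x-update: one linearized proximal (soft-thresholding) step on ||x||_1
        g = rho * (A.T @ (A @ x - y + u))
        z = x - t * g
        x = np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
        # y-update: project Ax + u onto the ball ||y - b||_2 <= eps
        d = A @ x + u - b
        nrm = np.linalg.norm(d)
        y = b + d * (min(1.0, eps / nrm) if nrm > 0 else 0.0)
        # dual ascent on the multiplier for the constraint Ax = y
        u = u + A @ x - y
    return x

# Tiny demo: recover a 4-sparse vector from 30 noiseless measurements.
np.random.seed(0)
m, n, k = 30, 60, 4
A = np.random.randn(m, n) / np.sqrt(m)
x0 = np.zeros(n)
x0[np.random.choice(n, k, replace=False)] = np.random.choice([-1.0, 1.0], k)
b = A @ x0
x_hat = qcbp_linearized_admm(A, b, eps=1e-3)
```

Each iteration costs only two multiplications by A and one by Aᵀ, which is what makes operator-splitting schemes of this kind attractive at large scale.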

Optimal Pooling Matrix Design for Group Testing with Dilution (Row Degree) Constraints

no code implementations · 5 Aug 2020 · Jirong Yi, Myung Cho, Xiaodong Wu, Raghu Mudumbai, Weiyu Xu

In this paper, we consider the problem of designing an optimal pooling matrix for group testing (for example, for COVID-19 virus testing) under the constraint that no more than $r>0$ samples can be pooled together, which we call the "dilution constraint".
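
A minimal illustration of the dilution constraint: the random row-regular construction below only enforces the row-degree bound (each test pools exactly r samples); the paper is about optimizing the matrix, which this baseline does not do.

```python
import numpy as np

def random_pooling_matrix(n_tests, n_samples, r, seed=0):
    """Binary pooling matrix in which every row (test) pools exactly r
    samples, so the dilution (row degree) constraint holds by design.
    A random baseline only; the paper constructs optimized designs."""
    rng = np.random.default_rng(seed)
    M = np.zeros((n_tests, n_samples), dtype=int)
    for i in range(n_tests):
        # each test mixes r distinct samples, never more
        M[i, rng.choice(n_samples, size=r, replace=False)] = 1
    return M

M = random_pooling_matrix(n_tests=16, n_samples=40, r=5)
print(M.sum(axis=1))  # every row sums to 5: no test exceeds the dilution bound
```

Rows of the matrix correspond to tests and columns to samples; the row degree is exactly the number of samples mixed into one test, which is what dilution limits in practice.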

Derivation of Information-Theoretically Optimal Adversarial Attacks with Applications to Robust Machine Learning

no code implementations · 28 Jul 2020 · Jirong Yi, Raghu Mudumbai, Weiyu Xu

We consider the theoretical problem of designing an optimal adversarial attack on a decision system that maximally degrades the achievable performance of the system as measured by the mutual information between the degraded signal and the label of interest.

Adversarial Attack

Do Deep Minds Think Alike? Selective Adversarial Attacks for Fine-Grained Manipulation of Multiple Deep Neural Networks

no code implementations · 26 Mar 2020 · Zain Khan, Jirong Yi, Raghu Mudumbai, Xiaodong Wu, Weiyu Xu

Recent works have demonstrated the existence of "adversarial examples" targeting a single machine learning system.

Trust but Verify: An Information-Theoretic Explanation for the Adversarial Fragility of Machine Learning Systems, and a General Defense against Adversarial Attacks

no code implementations · 25 May 2019 · Jirong Yi, Hui Xie, Leixin Zhou, Xiaodong Wu, Weiyu Xu, Raghuraman Mudumbai

In this paper, we present a simple hypothesis about a feature compression property of artificial intelligence (AI) classifiers and present theoretical arguments to show that this hypothesis successfully accounts for the observed fragility of AI classifiers to small adversarial perturbations.

An Information-Theoretic Explanation for the Adversarial Fragility of AI Classifiers

no code implementations · 27 Jan 2019 · Hui Xie, Jirong Yi, Weiyu Xu, Raghu Mudumbai

We present a simple hypothesis about a compression property of artificial intelligence (AI) classifiers and present theoretical arguments to show that this hypothesis successfully accounts for the observed fragility of AI classifiers to small adversarial perturbations.

Outlier Detection using Generative Models with Theoretical Performance Guarantees

no code implementations · 26 Oct 2018 · Jirong Yi, Anh Duc Le, Tianming Wang, Xiaodong Wu, Weiyu Xu

In this paper, we propose a generative-model neural-network approach for reconstructing the ground-truth signal in the presence of sparse outliers.
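
The outlier-robust recovery idea can be sketched with an ℓ₁ data-fidelity term. In this toy version the generative model is deliberately simplified to a fixed linear map W (a hypothetical stand-in for a trained network), which lets the ℓ₁ fit be solved exactly as a linear program; the real approach optimizes over a neural generator's latent code instead.

```python
import numpy as np
from scipy.optimize import linprog

# Toy illustration: the "generator" is simplified to a fixed linear map W
# (a real generative model would be a trained neural network). We recover
# the latent code z from measurements corrupted by sparse outliers by
# minimizing the l1 data-fidelity ||A W z - y||_1, cast as a linear program.
rng = np.random.default_rng(1)
m, n, k = 40, 20, 5                     # measurements, signal dim, latent dim
A = rng.standard_normal((m, n)) / np.sqrt(m)
W = rng.standard_normal((n, k))         # stand-in "generator" G(z) = W z
z0 = rng.standard_normal(k)
y = A @ W @ z0
y[:5] += 10.0                           # 5 gross outliers in the measurements

M = A @ W                               # effective (m x k) linear model
# LP: minimize sum(t)  s.t.  -t <= M z - y <= t, with variables v = [z; t]
c = np.concatenate([np.zeros(k), np.ones(m)])
A_ub = np.block([[M, -np.eye(m)], [-M, -np.eye(m)]])
b_ub = np.concatenate([y, -y])
bounds = [(None, None)] * k + [(0, None)] * m
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
z_hat = res.x[:k]
print(np.max(np.abs(z_hat - z0)))       # near zero: the outliers are rejected
```

The ℓ₁ loss is what confers the robustness: a squared-error fit would spread the 5 gross errors across all 40 equations, while the ℓ₁ fit absorbs them in a few large residuals and fits the clean majority exactly.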

Outlier Detection

Necessary and Sufficient Null Space Condition for Nuclear Norm Minimization in Low-Rank Matrix Recovery

no code implementations · 14 Feb 2018 · Jirong Yi, Weiyu Xu

In [12, 14, 15], the authors established the necessary and sufficient null space conditions for nuclear norm minimization to recover every possible low-rank matrix with rank at most r (the strong null space condition).
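
The strong null space condition can be spot-checked numerically: for every nonzero Z in the null space of the measurement operator, the sum of the r largest singular values of Z must stay below the sum of the remaining ones. The sketch below samples random null-space elements of a random Gaussian operator (an assumption for illustration); sampling can only refute the condition, never certify it.

```python
import numpy as np

def sampled_null_space_check(n1, n2, n_meas, r, n_trials=200, seed=0):
    """Monte Carlo spot-check of the strong null space condition for
    nuclear norm minimization. Returns False if a sampled null-space
    element Z violates  sum of top-r singular values < sum of the rest;
    returning True means only that no violation was found by sampling."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n_meas, n1 * n2))   # vectorized measurement operator
    # Orthonormal basis of null(A) from the full SVD
    _, _, Vt = np.linalg.svd(A)
    null_basis = Vt[n_meas:]                     # (n1*n2 - n_meas) basis rows
    for _ in range(n_trials):
        coeffs = rng.standard_normal(null_basis.shape[0])
        Z = (coeffs @ null_basis).reshape(n1, n2)
        s = np.linalg.svd(Z, compute_uv=False)
        if s[:r].sum() >= s[r:].sum():
            return False                         # condition violated by this Z
    return True                                  # no violation found (not a proof)

ok = sampled_null_space_check(n1=8, n2=8, n_meas=50, r=1)
print(ok)
```

This mirrors the role of the null space condition in the paper: recovery of every rank-≤ r matrix by nuclear norm minimization is equivalent to the inequality holding over the entire null space, which a finite sample can disprove but not establish.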

Collaborative Filtering

Separation-Free Super-Resolution from Compressed Measurements is Possible: an Orthonormal Atomic Norm Minimization Approach

no code implementations · 4 Nov 2017 · Weiyu Xu, Jirong Yi, Soura Dasgupta, Jian-Feng Cai, Mathews Jacob, Myung Cho

However, it is known that in order for TV minimization and atomic norm minimization to recover the missing data or the frequencies, the underlying $R$ frequencies are required to be well-separated, even when the measurements are noiseless.
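
The separation requirement this paper removes can be made concrete. Classical guarantees for TV and atomic norm minimization bound the minimum wrap-around distance between frequencies on the torus [0, 1), typically requiring a few multiples of 1/n for n samples; the sketch below computes that minimum separation (the 4/n threshold used here is an illustrative classical figure, not this paper's condition).

```python
import numpy as np

def min_wraparound_separation(freqs):
    """Minimum pairwise separation of frequencies on the torus [0, 1),
    the quantity that classical separation-based recovery guarantees
    bound from below."""
    f = np.sort(np.mod(np.asarray(freqs, dtype=float), 1.0))
    gaps = np.diff(np.concatenate([f, [f[0] + 1.0]]))  # include wrap-around gap
    return gaps.min()

n = 64                                   # number of (hypothetical) samples
freqs = [0.10, 0.13, 0.55, 0.97]
sep = min_wraparound_separation(freqs)
print(sep, sep >= 4.0 / n)               # 0.03 False: violates the classical bound
```

A frequency pair like 0.10 and 0.13 falls below the classical threshold, so separation-based guarantees say nothing about it; the paper's point is that its orthonormal atomic norm approach can still succeed in such separation-free regimes.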

Super-Resolution
