Search Results for author: Yizhen Wang

Found 8 papers, 2 papers with code

Burning the Adversarial Bridges: Robust Windows Malware Detection Against Binary-level Mutations

no code implementations5 Oct 2023 Ahmed Abusnaina, Yizhen Wang, Sunpreet Arora, Ke Wang, Mihai Christodorescu, David Mohaisen

Having highlighted volatile information channels within the software, we introduce three software pre-processing steps that eliminate this attack surface: padding removal, software stripping, and inter-section information resetting.

Malware Detection
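The padding-removal step can be illustrated with a minimal sketch. This is a hypothetical helper, not the authors' implementation, and it assumes the simplest form of padding: adversarial zero bytes appended after the file's real content.

```python
def strip_trailing_padding(binary: bytes) -> bytes:
    """Remove trailing zero-byte padding appended after the real content.

    Binary-level mutation attacks often append adversarial bytes as
    'padding' that the loader ignores; stripping it before feature
    extraction removes that channel. (Illustrative sketch only.)
    """
    return binary.rstrip(b"\x00")

# A toy 'binary' with 64 null bytes of appended padding.
payload = b"MZ\x90\x00real-code-and-data"
padded = payload + b"\x00" * 64
```

Real padding in PE files is more structured (e.g., bytes beyond the last section's declared end), but the idea is the same: discard content the program never executes.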

Adversarial Example Detection Using Latent Neighborhood Graph

no code implementations ICCV 2021 Ahmed Abusnaina, Yuhang Wu, Sunpreet Arora, Yizhen Wang, Fei Wang, Hao Yang, David Mohaisen

We present the first graph-based adversarial detection method that constructs a Latent Neighborhood Graph (LNG) around an input example to determine if the input example is adversarial.

Adversarial Attack, Graph Attention
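As a rough illustration of the idea (not the paper's actual LNG construction, which additionally generates synthetic neighbors and classifies the graph with graph attention), a neighborhood graph around an input embedding might be built as follows; `k` and `epsilon` are assumed parameters:

```python
import numpy as np

def latent_neighborhood_graph(query, reference, k=5, epsilon=1.5):
    """Sketch: connect an input to nearby reference points in latent space.

    Nodes are the query embedding plus its k nearest reference embeddings;
    an edge joins any two nodes closer than `epsilon` in latent space.
    """
    dists = np.linalg.norm(reference - query, axis=1)
    nearest = np.argsort(dists)[:k]
    nodes = np.vstack([query, reference[nearest]])   # node 0 is the input
    pairwise = np.linalg.norm(nodes[:, None] - nodes[None, :], axis=-1)
    adjacency = (pairwise < epsilon) & ~np.eye(len(nodes), dtype=bool)
    return nodes, adjacency
```

A detector would then classify the resulting graph as adversarial or benign; the intuition is that adversarial inputs sit in latent neighborhoods whose geometry differs from that of clean inputs.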

Robust and Accurate Authorship Attribution via Program Normalization

no code implementations1 Jul 2020 Yizhen Wang, Mohannad Alhanahnah, Ke Wang, Mihai Christodorescu, Somesh Jha

To address these emerging issues, we formulate this security challenge as a general threat model, the $\textit{relational adversary}$, which allows an arbitrary number of semantics-preserving transformations to be applied to an input in any problem space.

Authorship Attribution, Image Classification, +1
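Program normalization can be sketched for one common semantics-preserving transformation, variable renaming. The toy normalizer below (illustrative only, not the paper's) maps identifiers to canonical names in order of first appearance, so two renamed variants of the same program normalize identically:

```python
import ast

class _Canonicalize(ast.NodeTransformer):
    """Rename variables to canonical names in order of first appearance."""
    def __init__(self):
        self.mapping = {}

    def visit_Name(self, node):
        if node.id not in self.mapping:
            self.mapping[node.id] = f"v{len(self.mapping)}"
        node.id = self.mapping[node.id]
        return node

def normalize(src: str) -> str:
    """Return a renaming-invariant form of a Python source string."""
    return ast.unparse(_Canonicalize().visit(ast.parse(src)))

# Two versions of the same program under a semantics-preserving rename:
a = "total = 0\nfor x in data:\n    total = total + x"
b = "s = 0\nfor item in values:\n    s = s + item"
```

An attribution model trained on normalized programs cannot be fooled by this particular transformation, since both variants collapse to the same input.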

An Investigation of Data Poisoning Defenses for Online Learning

no code implementations28 May 2019 Yizhen Wang, Somesh Jha, Kamalika Chaudhuri

Data poisoning attacks, in which an adversary modifies a small fraction of the training data with the goal of forcing the trained classifier to incur high loss, are an important threat to machine learning in many applications.

Data Poisoning, General Classification
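One simple defense in the data-sanitization family such work studies can be sketched as follows; this particular loss-based filter is an illustration, not the paper's algorithm. The learner checks each arriving example against its current model and skips examples whose loss is anomalously high:

```python
import numpy as np

def filtered_online_sgd(stream, lr=0.1, loss_cap=3.0):
    """Online logistic regression that drops suspicious examples (a sketch).

    Each arriving (x, y) pair with y in {-1, +1} is checked against the
    current model; examples whose logistic loss exceeds `loss_cap` are
    treated as potentially poisoned and skipped.
    """
    w = np.zeros(stream[0][0].shape[0])
    accepted = 0
    for x, y in stream:
        margin = y * w.dot(x)
        loss = np.logaddexp(0.0, -margin)          # log(1 + exp(-margin))
        if loss > loss_cap:
            continue                               # sanitize: reject point
        w += lr * y * x / (1.0 + np.exp(margin))   # logistic gradient step
        accepted += 1
    return w, accepted
```

The tension such filters face, and which motivates a formal investigation, is that a strategic adversary can craft poison points that individually look benign to the filter while still steering the model.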

Data Poisoning Attacks against Online Learning

no code implementations27 Aug 2018 Yizhen Wang, Kamalika Chaudhuri

While there has been much prior work on data poisoning, most of it is in the offline setting, and attacks for online learning, where training data arrives in a streaming manner, are not well understood.

Data Poisoning
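In the streaming setting, even a simple label-flipping adversary illustrates the threat (an illustrative attack, not necessarily one from the paper): corrupting a fraction of the examples arriving late in the stream drags an online gradient-descent learner away from a good model it had already reached.

```python
import numpy as np

def online_sgd(stream, lr=0.1):
    """Plain online logistic regression over a stream of (x, y), y in {-1, +1}."""
    w = np.zeros(stream[0][0].shape[0])
    for x, y in stream:
        margin = y * w.dot(x)
        w += lr * y * x / (1.0 + np.exp(margin))   # logistic gradient step
    return w

def poison(stream, fraction=0.4):
    """Adversary flips the labels of the last `fraction` of the stream."""
    n = int(fraction * len(stream))
    return stream[:-n] + [(x, -y) for x, y in stream[-n:]]
```

Poisoning the tail of the stream is especially damaging here because the final model is dominated by the most recent updates; this recency effect is exactly what makes the online setting differ from offline poisoning.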

Analyzing the Robustness of Nearest Neighbors to Adversarial Examples

1 code implementation ICML 2018 Yizhen Wang, Somesh Jha, Kamalika Chaudhuri

Our analysis shows that its robustness properties depend critically on the value of k: the classifier may be inherently non-robust for small k, but its robustness approaches that of the Bayes optimal classifier for fast-growing k. We also propose a novel modified 1-nearest-neighbor classifier and guarantee its robustness in the large-sample limit.
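The dependence on k is easy to see on a toy example (an illustration, not the paper's analysis): a single noisy training point of the wrong class lets a tiny perturbation flip a 1-NN prediction, while a larger-k vote absorbs it.

```python
import numpy as np

def knn_predict(X, y, query, k):
    """Majority vote among the k nearest training points; labels in {-1, +1}."""
    idx = np.argsort(np.linalg.norm(X - query, axis=1))[:k]
    return 1 if y[idx].sum() > 0 else -1

# Training set: a +1 cluster, a -1 cluster, and one noisy -1 point
# sitting inside the +1 region.
X = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, -0.2], [0.3, 0.2],
              [5.0, 5.0], [5.2, 4.9], [4.9, 5.1],
              [0.4, 0.0]])                 # the noisy point
y = np.array([1, 1, 1, 1, -1, -1, -1, -1])

query = np.array([0.15, 0.0])              # clean +1 input
adversarial = np.array([0.35, 0.0])        # small shift toward the noisy point
```

With k = 1 the perturbed input lands closest to the noisy point and is misclassified; with k = 5 the four surrounding +1 points outvote it.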

Pufferfish Privacy Mechanisms for Correlated Data

no code implementations13 Mar 2016 Shuang Song, Yizhen Wang, Kamalika Chaudhuri

Since this mechanism may be computationally inefficient, we provide an additional, computationally efficient mechanism that applies to some practical cases, such as physical activity measurements across time.
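As background, a Laplace-style mechanism releases a noisy statistic, and the core difficulty with correlated data is that one record can influence its neighbors, so the noise must exceed what i.i.d. sensitivity suggests. The sketch below mimics this with an assumed `corr_factor` inflation; it is purely illustrative, and the paper's mechanisms calibrate this rigorously rather than taking it as a parameter.

```python
import numpy as np

def noisy_mean(series, epsilon, value_range, corr_factor=1.0, rng=None):
    """Release a differentially private mean of a time series (a sketch).

    For i.i.d. data the mean's sensitivity is value_range / n; under
    correlation, one record also sways its neighbors, so the noise scale
    is inflated by `corr_factor` (a stand-in for a principled calibration).
    """
    rng = rng or np.random.default_rng()
    sensitivity = corr_factor * value_range / len(series)
    noise = rng.laplace(scale=sensitivity / epsilon)
    return float(np.mean(series)) + noise
```

For a long, weakly correlated series (e.g., activity measurements over time), the inflation needed can be modest, which is what makes a practical, efficient mechanism possible.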
