Search Results for author: Zhuoran Liu

Found 15 papers, 10 papers with code

Beyond Neural-on-Neural Approaches to Speaker Gender Protection

1 code implementation • 30 Jun 2023 • Loes Van Bemmel, Zhuoran Liu, Nik Vaessen, Martha Larson

Currently, the common practice for developing and testing gender protection algorithms is "neural-on-neural", i.e., perturbations are generated and tested with a neural network.

Attribute

Image Shortcut Squeezing: Countering Perturbative Availability Poisons with Compression

1 code implementation • 31 Jan 2023 • Zhuoran Liu, Zhengyu Zhao, Martha Larson

Perturbative availability poisons (PAPs) add small changes to images to prevent their use for model training.
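The countermeasure studied here is simple compression: lossily re-encoding poisoned images can destroy the small perturbation while keeping the content usable for training. A minimal sketch, assuming Pillow and NumPy; the low JPEG quality value is illustrative rather than the paper's exact configuration:

    # Counter perturbative availability poisons by JPEG re-encoding.
    import io
    import numpy as np
    from PIL import Image

    def jpeg_squeeze(image: Image.Image, quality: int = 10) -> Image.Image:
        """Round-trip an image through low-quality JPEG compression."""
        buf = io.BytesIO()
        image.convert("RGB").save(buf, format="JPEG", quality=quality)
        buf.seek(0)
        return Image.open(buf).convert("RGB")

    # Usage: squeeze every (possibly poisoned) image before it enters training.
    poisoned = Image.fromarray(np.uint8(np.random.rand(32, 32, 3) * 255))
    squeezed = jpeg_squeeze(poisoned, quality=10)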

Generative Poisoning Using Random Discriminators

1 code implementation • 2 Nov 2022 • Dirren van Vlijmen, Alex Kolmus, Zhuoran Liu, Zhengyu Zhao, Martha Larson

We introduce ShortcutGen, a new data poisoning attack that generates sample-dependent, error-minimizing perturbations by learning a generator.

Data Poisoning
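A hedged sketch of the generator-side step this abstract describes, in PyTorch: the generator learns sample-dependent perturbations that a fixed, randomly initialized discriminator classifies "too easily" (error-minimizing). The tanh bounding and the 8/255 budget are illustrative assumptions, not details taken from the paper:

    import torch
    import torch.nn.functional as F

    EPSILON = 8 / 255  # assumed L-inf perturbation budget

    def poison_step(generator, discriminator, optimizer, x, y):
        # Bound the perturbation via tanh so ||delta||_inf <= EPSILON.
        delta = EPSILON * torch.tanh(generator(x))
        logits = discriminator((x + delta).clamp(0, 1))
        loss = F.cross_entropy(logits, y)  # minimized: an error-minimizing shortcut
        optimizer.zero_grad()              # optimizer updates only the generator
        loss.backward()
        optimizer.step()
        return loss.item()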

Level Up with RealAEs: Leveraging Domain Constraints in Feature Space to Strengthen Robustness of Android Malware Detection

no code implementations • 30 May 2022 • Hamid Bostani, Zhengyu Zhao, Zhuoran Liu, Veelasha Moonsamy

Realistic attacks in the Android malware domain create Realizable Adversarial Examples (RealAEs), i.e., AEs that satisfy the domain constraints of Android malware.

Adversarial Robustness • Android Malware Detection +2
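One way to read "satisfy the domain constraints" is as a projection in feature space. A sketch under the assumed simplification that features are binary and malware functionality is preserved only by adding features; the paper's actual constraint set may be richer:

    import numpy as np

    def project_realizable(x_orig: np.ndarray, x_adv: np.ndarray) -> np.ndarray:
        """Project a perturbed feature vector back into the feasible region."""
        x_proj = np.round(np.clip(x_adv, 0.0, 1.0))  # keep features binary
        x_proj = np.maximum(x_proj, x_orig)          # never remove original features
        return x_proj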

Going Grayscale: The Road to Understanding and Improving Unlearnable Examples

1 code implementation • 25 Nov 2021 • Zhuoran Liu, Zhengyu Zhao, Alex Kolmus, Tijn Berns, Twan van Laarhoven, Tom Heskes, Martha Larson

Recent work has shown that imperceptible perturbations can be applied to craft unlearnable examples (ULEs), i.e., images whose content cannot be used to improve a classifier during training.
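As the title suggests, one countermeasure direction is collapsing images to luminance before training, so that color-coded shortcut perturbations lose much of their signal. A minimal torchvision sketch; the surrounding training pipeline is assumed:

    from torchvision import transforms

    grayscale_train_transform = transforms.Compose([
        transforms.Grayscale(num_output_channels=3),  # keep the 3-channel input shape
        transforms.ToTensor(),
    ])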

On Success and Simplicity: A Second Look at Transferable Targeted Attacks

4 code implementations • NeurIPS 2021 • Zhengyu Zhao, Zhuoran Liu, Martha Larson

In particular, we identify, for the first time, that a simple logit loss can yield results competitive with the state of the art.
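The logit loss is simply the raw target-class logit, maximized directly in place of a cross-entropy objective. A sketch of an iterative targeted attack built on it; the step size, budget, and iteration count are illustrative defaults, not the paper's exact settings:

    import torch

    def targeted_logit_attack(model, x, target, epsilon=16 / 255, alpha=2 / 255, steps=300):
        """x: (B, C, H, W) in [0, 1]; target: (B,) desired labels."""
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            logits = model(x_adv)
            loss = logits.gather(1, target.unsqueeze(1)).sum()  # raw target logits
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign()        # ascend on the logit
            x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)    # L-inf projection
            x_adv = x_adv.clamp(0, 1).detach()
        return x_adv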

Adversarial Image Color Transformations in Explicit Color Filter Space

1 code implementation • 12 Nov 2020 • Zhengyu Zhao, Zhuoran Liu, Martha Larson

In particular, our color filter space is explicitly specified so that we are able to provide a systematic analysis of model robustness against adversarial color transformations, from both the attack and defense perspectives.

Adversarial Robustness
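A hedged sketch of what an explicitly specified filter space can look like: a monotonic, piecewise-linear intensity curve per color channel whose K slopes are the optimizable parameters. The paper's exact parameterization may differ:

    import torch

    def apply_color_filter(x: torch.Tensor, params: torch.Tensor) -> torch.Tensor:
        """x: (B, 3, H, W) in [0, 1]; params: (3, K) unnormalized slopes."""
        K = params.shape[1]
        slopes = torch.softmax(params, dim=1)                 # positive, sum to 1
        knots = torch.cumsum(slopes, dim=1)                   # curve values at knots
        knots = torch.cat([torch.zeros(3, 1, device=params.device), knots], dim=1)
        idx = (x * K).clamp(0, K - 1e-6)
        lo = idx.floor().long()                               # segment index per pixel
        frac = idx - lo.float()
        outs = []
        for c in range(3):                                    # linear interpolation
            y0, y1 = knots[c][lo[:, c]], knots[c][lo[:, c] + 1]
            outs.append(y0 + frac[:, c] * (y1 - y0))
        return torch.stack(outs, dim=1)

Because the mapping is differentiable in params, gradient ascent on a classifier's loss turns the filter into an adversarial color transformation.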

Adversarial Item Promotion: Vulnerabilities at the Core of Top-N Recommenders that Use Images to Address Cold Start

1 code implementation • 2 Jun 2020 • Zhuoran Liu, Martha Larson

Our experiments evaluate the danger of these attacks when mounted against three representative visually-aware recommender algorithms in a framework that uses images to address cold start.

Recommendation Systems

Adversarial Color Enhancement: Generating Unrestricted Adversarial Images by Optimizing a Color Filter

1 code implementation • 3 Feb 2020 • Zhengyu Zhao, Zhuoran Liu, Martha Larson

We introduce an approach that enhances images using a color filter in order to create adversarial effects, which fool neural networks into misclassification.

Image Enhancement

Towards Large yet Imperceptible Adversarial Image Perturbations with Perceptual Color Distance

2 code implementations • CVPR 2020 • Zhengyu Zhao, Zhuoran Liu, Martha Larson

The success of image perturbations that are designed to fool image classifiers is assessed in terms of both adversarial effect and visual imperceptibility.

Image Classification
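Perceptual color distance here refers to CIEDE2000, for which scikit-image provides an implementation. A sketch of scoring a perturbation by it; averaging the per-pixel distances is an illustrative aggregation, not necessarily the paper's exact objective:

    import numpy as np
    from skimage.color import deltaE_ciede2000, rgb2lab

    def perceptual_distance(img: np.ndarray, img_adv: np.ndarray) -> float:
        """img, img_adv: (H, W, 3) float RGB arrays in [0, 1]."""
        d = deltaE_ciede2000(rgb2lab(img), rgb2lab(img_adv))  # per-pixel Delta E
        return float(d.mean())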

Non-Determinism in Neural Networks for Adversarial Robustness

no code implementations • 26 May 2019 • Daanish Ali Khan, Linhong Li, Ninghao Sha, Zhuoran Liu, Abelino Jimenez, Bhiksha Raj, Rita Singh

Recent breakthroughs in the field of deep learning have led to advancements in a broad spectrum of tasks in computer vision, audio processing, natural language processing and other areas.

Adversarial Robustness

Who's Afraid of Adversarial Queries? The Impact of Image Modifications on Content-based Image Retrieval

1 code implementation • 29 Jan 2019 • Zhuoran Liu, Zhengyu Zhao, Martha Larson

An adversarial query is an image that has been modified to disrupt content-based image retrieval (CBIR) while appearing nearly untouched to the human eye.

Blocking • Content-Based Image Retrieval +1
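For context, CBIR typically ranks index images by feature similarity, so a query perturbation that moves the feature vector reshuffles the ranking. A minimal sketch that assumes features have already been extracted by some deep model:

    import torch
    import torch.nn.functional as F

    def retrieve(query_feat: torch.Tensor, index_feats: torch.Tensor, k: int = 5):
        """query_feat: (D,); index_feats: (N, D). Returns the top-k index ids."""
        sims = F.cosine_similarity(index_feats, query_feat.unsqueeze(0), dim=1)
        return sims.topk(k).indices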

Exploiting Unlabeled Data for Neural Grammatical Error Detection

no code implementations • 28 Nov 2016 • Zhuoran Liu, Yang Liu

Identifying and correcting grammatical errors in text written by non-native writers has received increasing attention in recent years.

Binary Classification • General Classification +1
