1 code implementation • 30 Jun 2023 • Loes Van Bemmel, Zhuoran Liu, Nik Vaessen, Martha Larson
Currently, the common practice for developing and testing gender protection algorithms is "neural-on-neural", i.e., perturbations are generated and tested with a neural network.
1 code implementation • 31 Jan 2023 • Zhuoran Liu, Zhengyu Zhao, Martha Larson
Perturbative availability poisons (PAPs) add small changes to images to prevent their use for model training.
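As a rough illustration of the general PAP recipe (not this paper's specific method), a poison is typically an L-infinity-bounded perturbation added to a clean image before release; `apply_pap` and the random stand-in noise below are hypothetical:

```python
import torch

def apply_pap(image: torch.Tensor, perturbation: torch.Tensor,
              epsilon: float = 8 / 255) -> torch.Tensor:
    """Add an L-infinity-bounded perturbation to an image in [0, 1]."""
    delta = perturbation.clamp(-epsilon, epsilon)  # enforce the poison budget
    return (image + delta).clamp(0.0, 1.0)         # keep a valid image

clean = torch.rand(3, 224, 224)                              # stand-in clean image
noise = torch.empty_like(clean).uniform_(-8 / 255, 8 / 255)  # stand-in poison
poisoned = apply_pap(clean, noise)
```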
1 code implementation • 2 Nov 2022 • Dirren van Vlijmen, Alex Kolmus, Zhuoran Liu, Zhengyu Zhao, Martha Larson
We introduce ShortcutGen, a new data poisoning attack that generates sample-dependent, error-minimizing perturbations by learning a generator.
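A minimal sketch of one plausible training step for such a generator, assuming a generator `G` that maps images to perturbations and a surrogate classifier `f` (the tanh bounding and all names are illustrative, not ShortcutGen's exact formulation):

```python
import torch
import torch.nn.functional as F

def poison_generator_step(G, f, images, labels, opt, epsilon=8 / 255):
    """One update of a poison generator G against a surrogate classifier f."""
    delta = epsilon * torch.tanh(G(images))      # sample-dependent, bounded noise
    poisoned = (images + delta).clamp(0, 1)
    loss = F.cross_entropy(f(poisoned), labels)  # error-minimizing objective
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Training `G` to minimize the surrogate's loss on poisoned inputs is what creates the "shortcut": the perturbation, rather than the image content, comes to predict the label.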
no code implementations • 16 Sep 2022 • Zhuoran Liu, Leqi Zou, Xuan Zou, Caihua Wang, Biao Zhang, Da Tang, Bolin Zhu, Yijie Zhu, Peng Wu, Ke Wang, Youlong Cheng
In this paper, we present Monolith, a system tailored for online training.
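Purely as an illustration of the online-training pattern the abstract refers to (none of these names come from Monolith itself), a model can keep learning from a stream of fresh examples and periodically push updated parameters to serving:

```python
import torch
import torch.nn.functional as F

def online_training_loop(model, stream, sync_to_serving, sync_every=1000):
    """Continually update a model from streaming data; sync_to_serving is a stub."""
    opt = torch.optim.Adagrad(model.parameters(), lr=0.01)
    for step, (features, label) in enumerate(stream):
        loss = F.binary_cross_entropy_with_logits(model(features), label)
        opt.zero_grad()
        loss.backward()
        opt.step()
        if step % sync_every == 0:
            sync_to_serving(model.state_dict())  # push fresh parameters to serving
```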
no code implementations • 30 May 2022 • Hamid Bostani, Zhengyu Zhao, Zhuoran Liu, Veelasha Moonsamy
Realistic attacks in the Android malware domain create Realizable Adversarial Examples (RealAEs), i.e., AEs that satisfy the domain constraints of Android malware.
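One widely used constraint of this kind, sketched below under the assumption of binary presence features, is that an attacker can add features (e.g., extra API calls) but cannot remove existing functionality; the projection enforces this after an unconstrained perturbation:

```python
import numpy as np

def project_to_domain(x_orig: np.ndarray, x_adv: np.ndarray) -> np.ndarray:
    """Binarize a perturbed feature vector and keep it addition-only."""
    x_proj = np.round(np.clip(x_adv, 0.0, 1.0))  # back to binary features
    return np.maximum(x_proj, x_orig)            # original features must remain
```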
1 code implementation • 25 Nov 2021 • Zhuoran Liu, Zhengyu Zhao, Alex Kolmus, Tijn Berns, Twan van Laarhoven, Tom Heskes, Martha Larson
Recent work has shown that imperceptible perturbations can be applied to craft unlearnable examples (ULEs), i.e., images whose content cannot be used to improve a classifier during training.
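The general recipe behind such ULEs, as in the prior work the abstract refers to, is sample-wise error-minimizing noise: a bounded perturbation optimized to minimize the training loss, so that the noise rather than the image content predicts the label. A hedged sketch (step sizes and names are illustrative):

```python
import torch
import torch.nn.functional as F

def error_minimizing_noise(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """PGD-style descent on the training loss to craft unlearnable noise."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # descend (minimize), not ascend
            delta.clamp_(-epsilon, epsilon)     # stay within the budget
        delta.grad.zero_()
    return delta.detach()
```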
4 code implementations • NeurIPS 2021 • Zhengyu Zhao, Zhuoran Liu, Martha Larson
In particular, we identify, for the first time, that a simple logit loss can yield results competitive with the state of the art.
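The logit loss in question is simply the negative logit of the target class, used in place of cross-entropy; minimizing it drives the target logit up directly:

```python
import torch

def logit_loss(logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Negative target-class logit: lower loss means a higher target logit."""
    return -logits.gather(1, target.unsqueeze(1)).mean()
```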
1 code implementation • 12 Nov 2020 • Zhengyu Zhao, Zhuoran Liu, Martha Larson
In particular, our color filter space is explicitly specified so that we are able to provide a systematic analysis of model robustness against adversarial color transformations, from both the attack and defense perspectives.
1 code implementation • 2 Jun 2020 • Zhuoran Liu, Martha Larson
Our experiments evaluate the danger of these attacks when mounted against three representative visually-aware recommender algorithms in a framework that uses images to address cold start.
1 code implementation • 3 Feb 2020 • Zhengyu Zhao, Zhuoran Liu, Martha Larson
We introduce an approach that enhances images using a color filter in order to create adversarial effects, fooling neural networks into misclassification.
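As a rough sketch of the idea, one can optimize the parameters of a simple differentiable color adjustment (here a plain per-channel scaling, a stand-in for the paper's filter) so that the recolored image is misclassified:

```python
import torch
import torch.nn.functional as F

def color_filter_attack(model, x, y, steps=50, lr=0.05):
    """Optimize a per-channel color scaling to induce misclassification."""
    theta = torch.ones(1, 3, 1, 1, requires_grad=True)  # per-channel filter
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(steps):
        filtered = (x * theta).clamp(0, 1)
        loss = -F.cross_entropy(model(filtered), y)  # raise the true-class loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x * theta.detach()).clamp(0, 1)
```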
2 code implementations • CVPR 2020 • Zhengyu Zhao, Zhuoran Liu, Martha Larson
The success of image perturbations that are designed to fool image classifiers is assessed in terms of both adversarial effect and visual imperceptibility.
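One common way to quantify the imperceptibility side, sketched here with scikit-image (the paper's exact metric and setup may differ), is a perceptual color distance such as CIEDE2000 averaged over pixels:

```python
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

def mean_perceptual_distance(img: np.ndarray, adv: np.ndarray) -> float:
    """Average per-pixel CIEDE2000 distance between two RGB images in [0, 1]."""
    return float(deltaE_ciede2000(rgb2lab(img), rgb2lab(adv)).mean())
```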
no code implementations • SEMEVAL 2019 • Zhuoran Liu, Shivali Goel, Mukund Yelahanka Raghuprasad, Smaranda Muresan
The paper presents the Columbia team's participation in the SemEval 2019 Shared Task 7: RumourEval 2019.
no code implementations • 26 May 2019 • Daanish Ali Khan, Linhong Li, Ninghao Sha, Zhuoran Liu, Abelino Jimenez, Bhiksha Raj, Rita Singh
Recent breakthroughs in the field of deep learning have led to advancements in a broad spectrum of tasks in computer vision, audio processing, natural language processing and other areas.
1 code implementation • 29 Jan 2019 • Zhuoran Liu, Zhengyu Zhao, Martha Larson
An adversarial query is an image that has been modified to disrupt content-based image retrieval (CBIR) while appearing nearly untouched to the human eye.
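A hedged sketch of the underlying idea: perturb the query so its deep feature embedding drifts away from the original while the pixels barely change (`feature_extractor` and all hyperparameters below are stand-ins):

```python
import torch

def adversarial_query(feature_extractor, x, epsilon=8 / 255, alpha=2 / 255, steps=10):
    """Push the query's features away from the original under a small pixel budget."""
    with torch.no_grad():
        target_feat = feature_extractor(x)
    delta = torch.empty_like(x).uniform_(-epsilon, epsilon).requires_grad_(True)
    for _ in range(steps):
        feat = feature_extractor((x + delta).clamp(0, 1))
        loss = -((feat - target_feat) ** 2).mean()  # maximize feature drift
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()
            delta.clamp_(-epsilon, epsilon)
        delta.grad.zero_()
    return (x + delta.detach()).clamp(0, 1)
```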
no code implementations • 28 Nov 2016 • Zhuoran Liu, Yang Liu
Identifying and correcting grammatical errors in text written by non-native writers has received increasing attention in recent years.