1 code implementation • 25 Nov 2021 • Zhuoran Liu, Zhengyu Zhao, Alex Kolmus, Tijn Berns, Twan van Laarhoven, Tom Heskes, Martha Larson
Recent work has shown that imperceptible perturbations can be applied to craft unlearnable examples (ULEs), i.e., images whose content cannot be used to improve a classifier during training.
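For illustration only, a minimal sketch (not the paper's method) of how a bounded, imperceptible perturbation could be combined with an image to form an unlearnable training example; the perturbation `delta` and the bound `epsilon` are assumptions here, not values from the paper.

```python
import torch

def make_unlearnable(image: torch.Tensor, delta: torch.Tensor,
                     epsilon: float = 8 / 255) -> torch.Tensor:
    # Project the (assumed precomputed) perturbation into an L-infinity ball
    # of radius epsilon, add it to the image, and clip back to [0, 1].
    delta = delta.clamp(-epsilon, epsilon)
    return (image + delta).clamp(0.0, 1.0)
```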
1 code implementation • NeurIPS 2021 • Zhengyu Zhao, Zhuoran Liu, Martha Larson
In particular, we identify, for the first time, that a simple logit loss can yield results competitive with the state of the art.
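As a hedged sketch of what a "logit loss" for a targeted attack can look like (the exact formulation in the paper may differ): instead of cross-entropy, the attack directly maximizes the logit of the target class.

```python
import torch

def logit_loss(logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # logits: (batch, num_classes); target: (batch,) of target-class indices.
    # Negative target logit: minimizing this loss pushes the target logit up.
    return -logits.gather(1, target.unsqueeze(1)).mean()
```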
1 code implementation • 12 Nov 2020 • Zhengyu Zhao, Zhuoran Liu, Martha Larson
In this paper, we carry out a systematic robustness analysis of adversarial color enhancement (ACE) from both the attack and defense perspectives by varying the bound of the color filter parameters.
1 code implementation • EMNLP 2020 • Haoyu Song, Yan Wang, Wei-Nan Zhang, Zhengyu Zhao, Ting Liu, Xiaojiang Liu
Maintaining a consistent attribute profile is crucial for dialogue agents to naturally converse with humans.
1 code implementation • 3 Feb 2020 • Zhengyu Zhao, Zhuoran Liu, Martha Larson
We introduce an approach that enhances images with a color filter in order to create adversarial effects that fool neural networks into misclassification.
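An illustrative sketch, not the exact filter from the paper: a simple differentiable per-channel color adjustment whose parameters can be optimized by gradient descent to push a classifier toward misclassification; the parameterization below is an assumption for demonstration.

```python
import torch

class ColorFilter(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # One scale and one shift per RGB channel; bounding these parameters
        # would control how natural the filtered image looks.
        self.scale = torch.nn.Parameter(torch.ones(3, 1, 1))
        self.shift = torch.nn.Parameter(torch.zeros(3, 1, 1))

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # image: (3, H, W) in [0, 1]; apply the filter, clip to valid range.
        return (image * self.scale + self.shift).clamp(0.0, 1.0)
```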
2 code implementations • CVPR 2020 • Zhengyu Zhao, Zhuoran Liu, Martha Larson
The success of image perturbations that are designed to fool an image classifier is assessed in terms of both adversarial effect and visual imperceptibility.
1 code implementation • 29 Jan 2019 • Zhuoran Liu, Zhengyu Zhao, Martha Larson
An adversarial query is an image that has been modified to disrupt content-based image retrieval (CBIR) while appearing nearly untouched to the human eye.
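A minimal sketch of one way an adversarial query could be crafted, assuming a generic feature extractor `model` and a small L-infinity budget; the loop, step size `alpha`, and loss choice are illustrative assumptions rather than the paper's method.

```python
import torch

def adversarial_query(model, image, epsilon=4 / 255, steps=10, alpha=1 / 255):
    # Push the perturbed image's embedding away from the clean embedding so
    # that retrieval no longer matches, while keeping the change small.
    clean_feat = model(image.unsqueeze(0)).detach()
    adv = image.clone()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = torch.nn.functional.mse_loss(model(adv.unsqueeze(0)), clean_feat)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = (adv + alpha * grad.sign()).detach()
        # Project back into the L-infinity ball around the original image.
        adv = (image + (adv - image).clamp(-epsilon, epsilon)).clamp(0.0, 1.0)
    return adv
```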
1 code implementation • 23 Jul 2018 • Zhengyu Zhao, Martha Larson
As deep learning approaches to scene recognition have emerged, they have continued to leverage discriminative regions at multiple scales, building on practices established by conventional image classification research.