no code implementations • 17 Jan 2024 • Wenxin Ding, Arjun Nitin Bhagoji, Ben Y. Zhao, Haitao Zheng
In this paper, we explore the feasibility of generating multiple versions of a model that possess different attack properties, without acquiring new training data or changing the model architecture.
no code implementations • 24 Dec 2023 • Shufang Zhang, Minxue Ni, Lei Wang, Wenxin Ding, Shuai Chen, Yuhong Liu
Diffusion models have a strong ability to generate vivid, realistic images.
no code implementations • 20 Oct 2023 • Shawn Shan, Wenxin Ding, Josephine Passananti, Stanley Wu, Haitao Zheng, Ben Y. Zhao
In this paper, we show that poisoning attacks can be successful on generative models.
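As a concrete (and intentionally simple) illustration of the kind of attack this refers to, the sketch below shows dirty-label poisoning of a text-to-image training set: captions mentioning one concept are swapped for another, so a model trained on the corrupted pairs learns the wrong association. The `poison_captions` helper, the concept names, and the file paths are hypothetical, and this is a generic baseline rather than the paper's specific method.

```python
# Generic dirty-label poisoning sketch for a text-to-image dataset.
# Dataset layout and concept names are illustrative assumptions.
import random

def poison_captions(dataset, source="dog", target="cat", fraction=0.1, seed=0):
    """Relabel a fraction of captions containing `source` as `target`.

    dataset: list of (image_path, caption) tuples.
    Returns a new list with a subset of captions poisoned.
    """
    rng = random.Random(seed)
    poisoned = []
    for image_path, caption in dataset:
        if source in caption and rng.random() < fraction:
            caption = caption.replace(source, target)
        poisoned.append((image_path, caption))
    return poisoned

clean = [("img_001.png", "a photo of a dog"), ("img_002.png", "a cat on a sofa")]
print(poison_captions(clean, fraction=1.0))
```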
no code implementations • 21 Feb 2023 • Sihui Dai, Wenxin Ding, Arjun Nitin Bhagoji, Daniel Cullina, Ben Y. Zhao, Haitao Zheng, Prateek Mittal
Finding classifiers robust to adversarial examples is critical for their safe deployment.
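For readers unfamiliar with the term, an adversarial example is an input perturbed slightly so that a classifier's prediction flips. The sketch below uses the standard FGSM attack (a textbook construction, not the method of the paper above); `model`, `images`, and `labels` stand in for any PyTorch classifier and labeled batch.

```python
# Textbook FGSM sketch: one signed-gradient step that maximizes the loss.
import torch

def fgsm(model, x, y, eps=0.03):
    """Return an adversarially perturbed copy of x within an eps ball."""
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step in the sign of the gradient, then clip back to valid pixel range.
    x_adv = (x + eps * x.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()

# Usage with any image classifier returning logits:
# x_adv = fgsm(model, images, labels)  # model(x_adv) often misclassifies
```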
no code implementations • 29 Jun 2020 • Wenxin Ding, Nihar B. Shah, Weina Wang
The crux of the framework lies in recognizing that part of the data pertaining to the reviews is already publicly available; we use this information to post-process the data released by any privacy mechanism, improving its accuracy (utility) while retaining its privacy guarantees.
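A minimal sketch of the underlying principle: differential privacy is closed under post-processing, so a noisy release can be adjusted using publicly known facts without weakening the guarantee. The histogram, the epsilon value, and the choice of "total review count" as the public fact are assumptions for illustration, not the paper's exact construction.

```python
# Hedged sketch: Laplace release of a score histogram, then post-processing
# with a publicly known constraint (counts are nonnegative and sum to a
# public total). Post-processing preserves the epsilon-DP guarantee.
import numpy as np

def laplace_release(counts, epsilon=1.0, sensitivity=1.0, seed=0):
    """Release a histogram under epsilon-DP via the Laplace mechanism."""
    rng = np.random.default_rng(seed)
    return counts + rng.laplace(scale=sensitivity / epsilon, size=counts.shape)

def postprocess(noisy, public_total):
    """Project onto constraints implied by public data."""
    x = np.clip(noisy, 0, None)          # counts cannot be negative
    if x.sum() > 0:
        x = x * (public_total / x.sum())  # match the publicly known total
    return x

true_counts = np.array([4.0, 10.0, 25.0, 9.0, 2.0])   # reviews per score 1..5
noisy = laplace_release(true_counts, epsilon=0.5)
cleaned = postprocess(noisy, public_total=true_counts.sum())  # total is public
print(noisy.round(2), cleaned.round(2))
```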