no code implementations • 2 Feb 2024 • Samuel Stevens, Emily Wenger, Cathy Li, Niklas Nolte, Eshika Saxena, François Charton, Kristin Lauter
Our architecture improvements enable scaling to larger-dimension LWE problems: this work is the first instance of ML attacks recovering sparse binary secrets in dimension $n=1024$, the smallest dimension used in practice for homomorphic encryption applications of LWE where sparse binary secrets are proposed.
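The LWE setting described above can be illustrated with a toy sketch. This is not the paper's attack, only a hedged example of what "LWE samples with a sparse binary secret" means; the parameters below are deliberately tiny (real homomorphic-encryption settings use $n=1024$ and much larger moduli), and all names are illustrative.

```python
import numpy as np

# Toy parameters for illustration only; real HE deployments use n = 1024
# and a much larger modulus q.
n, q, num_samples, hamming_weight = 64, 3329, 8, 5

rng = np.random.default_rng(0)

# Sparse binary secret: exactly `hamming_weight` coordinates are 1.
s = np.zeros(n, dtype=np.int64)
s[rng.choice(n, size=hamming_weight, replace=False)] = 1

# LWE samples (a, b) with b = <a, s> + e mod q, for small Gaussian error e.
# An eavesdropping attacker sees (A, b) and must recover s.
A = rng.integers(0, q, size=(num_samples, n))
e = np.rint(rng.normal(0, 3.2, size=num_samples)).astype(np.int64)
b = (A @ s + e) % q
```

The ML attacks in this line of work train models on many such (a, b) pairs to recover the secret s.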
no code implementations • 7 Mar 2023 • Cathy Li, Jana Sotáková, Emily Wenger, Mohamed Malhou, Evrard Garcelon, François Charton, Kristin Lauter
However, this attack assumes access to millions of eavesdropped LWE samples and fails at higher Hamming weights or dimensions.
no code implementations • 29 Aug 2022 • Emily Wenger, Xiuyu Li, Ben Y. Zhao, Vitaly Shmatikov
With only query access to a trained model, and with no knowledge of the model training process or control over the data labels, a user can apply statistical hypothesis testing to detect whether a model has learned the spurious features associated with their isotopes by training on the user's data.
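The hypothesis-testing idea can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the model's confidences are simulated here, standing in for real queries to a trained model, and the threshold is an arbitrary choice.

```python
import numpy as np
from scipy import stats

# Illustrative only: compare a model's confidence on isotope-marked vs.
# unmarked inputs. We simulate a model that has learned the spurious
# feature, so it is more confident on marked inputs.
rng = np.random.default_rng(1)
conf_marked = rng.normal(0.8, 0.05, size=100)
conf_unmarked = rng.normal(0.6, 0.05, size=100)

# One-sided two-sample t-test: did training on isotope-marked data
# measurably shift the model's behavior on marked inputs?
t_stat, p_value = stats.ttest_ind(conf_marked, conf_unmarked, alternative="greater")
detected = p_value < 0.01  # significance threshold chosen for illustration
```

A significant p-value under query-only access is exactly the kind of signal a data owner could use to flag unauthorized training on their data.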
no code implementations • 11 Jul 2022 • Emily Wenger, Mingjie Chen, François Charton, Kristin Lauter
Currently deployed public-key cryptosystems will be vulnerable to attacks by full-scale quantum computers.
1 code implementation • 21 Jun 2022 • Emily Wenger, Roma Bhattacharjee, Arjun Nitin Bhagoji, Josephine Passananti, Emilio Andere, Haitao Zheng, Ben Y. Zhao
Research on physical backdoors is limited by access to large datasets containing real images of physical objects co-located with targets of classification.
no code implementations • 11 Feb 2022 • Emily Wenger, Francesca Falzon, Josephine Passananti, Haitao Zheng, Ben Y. Zhao
In deep neural networks for facial recognition, feature vectors are numerical representations that capture the unique features of a given face.
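The role of feature vectors in facial recognition can be sketched briefly. This is a generic illustration, not the systems studied in the paper: the embeddings are random stand-ins for the output of a deep network, and the dimension is an assumption.

```python
import numpy as np

# Hypothetical 128-dimensional face embeddings; real FR systems produce
# these from a deep network's penultimate layer.
rng = np.random.default_rng(2)
face_a = rng.normal(size=128)
face_a_again = face_a + rng.normal(0, 0.05, size=128)  # same person, slight variation
face_b = rng.normal(size=128)

def cosine_similarity(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Matching works by comparing feature vectors: the same identity yields
# near-1 similarity, while unrelated faces score much lower.
same = cosine_similarity(face_a, face_a_again)
diff = cosine_similarity(face_a, face_b)
```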
no code implementations • 8 Dec 2021 • Emily Wenger, Shawn Shan, Haitao Zheng, Ben Y. Zhao
The rapid adoption of facial recognition (FR) technology by both government and commercial entities in recent years has raised concerns about civil liberties and privacy.
no code implementations • 20 Sep 2021 • Emily Wenger, Max Bronckers, Christian Cianfarani, Jenna Cryan, Angela Sha, Haitao Zheng, Ben Y. Zhao
Advances in deep learning have introduced a new wave of voice synthesis tools, capable of producing audio that sounds as if spoken by a target speaker.
no code implementations • CVPR 2021 • Emily Wenger, Josephine Passananti, Arjun Nitin Bhagoji, Yuanshun Yao, Haitao Zheng, Ben Y. Zhao
A critical question remains unanswered: can backdoor attacks succeed using physical objects as triggers, thus making them a credible threat against deep learning systems in the real world?
1 code implementation • 24 Jun 2020 • Huiying Li, Shawn Shan, Emily Wenger, Jiayun Zhang, Haitao Zheng, Ben Y. Zhao
In particular, query-based black-box attacks do not require knowledge of the deep learning model, but can compute adversarial examples over the network by submitting queries and inspecting the returned outputs.
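A query-based black-box attack can be sketched in miniature. This is a deliberately naive random-search attack against a toy linear classifier, assumed here only to show the query-only threat model; practical attacks (e.g. boundary or gradient-estimation attacks) are far more query-efficient, and `query_model` stands in for a remote API.

```python
import numpy as np

rng = np.random.default_rng(3)
w = rng.normal(size=10)  # hidden model weights; the attacker never sees these

def query_model(x):
    """Black box: the attacker observes only the output label."""
    return int(x @ w > 0)

x = rng.normal(size=10)
original_label = query_model(x)

# Random-search attack: submit perturbed queries and keep the first
# perturbation that flips the predicted label.
adversarial = None
for _ in range(10_000):
    delta = rng.normal(0, 2.0, size=10)
    if query_model(x + delta) != original_label:
        adversarial = x + delta
        break
```

Defenses like the one proposed in this paper aim to detect exactly this pattern of many highly similar queries.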
1 code implementation • 19 Feb 2020 • Shawn Shan, Emily Wenger, Jiayun Zhang, Huiying Li, Haitao Zheng, Ben Y. Zhao
In this paper, we propose Fawkes, a system that helps individuals inoculate their images against unauthorized facial recognition models.
1 code implementation • 2 Oct 2019 • Huiying Li, Emily Wenger, Shawn Shan, Ben Y. Zhao, Haitao Zheng
We empirically show that our proposed watermarks achieve piracy resistance and other watermark properties, over a wide range of tasks and models.
1 code implementation • 18 Apr 2019 • Shawn Shan, Emily Wenger, Bolun Wang, Bo Li, Haitao Zheng, Ben Y. Zhao
Attackers' optimization algorithms gravitate towards trapdoors, leading them to produce attacks similar to trapdoors in the feature space.