no code implementations • 13 Mar 2024 • Mingyue Cheng, Hao Zhang, Jiqian Yang, Qi Liu, Li Li, Xin Huang, Liwei Song, Zhi Li, Zhenya Huang, Enhong Chen
Through this gateway, users can submit their own questions, testing the models on a personalized and potentially broader range of capabilities.
no code implementations • 15 Oct 2021 • Xinyu Tang, Saeed Mahloujifar, Liwei Song, Virat Shejwalkar, Milad Nasr, Amir Houmansadr, Prateek Mittal
The goal of this work is to train ML models that have high membership privacy while largely preserving their utility. We therefore aim for an empirical membership privacy guarantee, as opposed to the provable guarantees offered by techniques such as differential privacy, which are known to deteriorate model utility.
no code implementations • 8 Jul 2020 • Liwei Song, Vikash Sehwag, Arjun Nitin Bhagoji, Prateek Mittal
With our evaluation across 6 OOD detectors, we find that the choice of in-distribution data, model architecture, and OOD data has a strong impact on OOD detection performance, inducing false positive rates in excess of $70\%$.
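To make the reported metric concrete: a common baseline detector (maximum softmax probability, not necessarily the one evaluated in this paper) scores each input by the model's top softmax value, and the false positive rate is measured at a threshold that retains most in-distribution inputs. A minimal pure-Python sketch, with all function names hypothetical:

```python
import math

def softmax(logits):
    """Convert raw logits to a probability distribution (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def msp_score(logits):
    """Maximum softmax probability: higher means 'looks more in-distribution'."""
    return max(softmax(logits))

def fpr_at_tpr(id_scores, ood_scores, tpr_target=0.95):
    """False positive rate (OOD inputs accepted as in-distribution) at the
    threshold that keeps roughly `tpr_target` of in-distribution inputs."""
    thresh = sorted(id_scores, reverse=True)[int(tpr_target * len(id_scores)) - 1]
    false_positives = sum(1 for s in ood_scores if s >= thresh)
    return false_positives / len(ood_scores)
```

A detector with false positive rates above 70%, as observed in the paper, would accept most OOD inputs at this threshold.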
BIG-bench Machine Learning • Out of Distribution (OOD) Detection
1 code implementation • NAACL 2021 • Liwei Song, Xinwei Yu, Hsuan-Tung Peng, Karthik Narasimhan
Recent work has demonstrated the vulnerability of modern text classifiers to universal adversarial attacks, which are input-agnostic sequences of words added to text processed by classifiers.
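The attack setting described above can be sketched in a few lines: the same fixed token sequence is prepended to every input, and the attack succeeds when it flips the classifier's prediction. This is a generic illustration of input-agnostic triggers, not the paper's specific method; the classifier, trigger tokens, and function names are hypothetical:

```python
def apply_universal_trigger(texts, trigger_tokens):
    """Prepend the same input-agnostic token sequence to every input."""
    trigger = " ".join(trigger_tokens)
    return [f"{trigger} {t}" for t in texts]

def attack_success_rate(classifier, texts, trigger_tokens, target_label):
    """Fraction of inputs the fixed trigger pushes to the attacker's target label."""
    attacked = apply_universal_trigger(texts, trigger_tokens)
    flips = sum(1 for t in attacked if classifier(t) == target_label)
    return flips / len(texts)
```

Because the trigger is chosen once and reused verbatim, a defender can look for such anomalous recurring prefixes, which is what makes detection of universal attacks tractable.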
1 code implementation • 24 Mar 2020 • Liwei Song, Prateek Mittal
Machine learning models are prone to memorizing sensitive data, making them vulnerable to membership inference attacks in which an adversary aims to guess if an input sample was used to train the model.
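A simple form of the membership inference attack described here thresholds the model's confidence on a candidate sample: memorized training points tend to receive unusually confident predictions. This is a generic confidence-threshold sketch, not necessarily the attack variant studied in the paper, and the names are hypothetical:

```python
def confidence_attack(model_confidence, sample, threshold=0.9):
    """Guess 'member' when the model's confidence on the sample's true label
    is high -- training points are often predicted with outsized confidence."""
    return model_confidence(sample) >= threshold

def attack_accuracy(model_confidence, members, non_members, threshold=0.9):
    """Balanced accuracy of the guess over known members and non-members."""
    correct = sum(confidence_attack(model_confidence, s, threshold) for s in members)
    correct += sum(not confidence_attack(model_confidence, s, threshold) for s in non_members)
    return correct / (len(members) + len(non_members))
```

An attack accuracy meaningfully above 50% on a balanced member/non-member split indicates measurable membership leakage.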
1 code implementation • 9 Mar 2020 • David Marco Sommer, Liwei Song, Sameer Wagh, Prateek Mittal
In this work, we take the first step in proposing a formal framework to study the design of such verification mechanisms for data deletion requests -- also known as machine unlearning -- in the context of systems that provide machine learning as a service (MLaaS).
1 code implementation • 24 May 2019 • Liwei Song, Reza Shokri, Prateek Mittal
To perform the membership inference attacks, we leverage the existing inference methods that exploit model predictions.
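One widely used prediction-based inference method of the kind referenced here scores samples by the entropy of the model's output distribution: confident (low-entropy) predictions are taken as evidence of membership. A minimal sketch, with the threshold value illustrative rather than taken from the paper:

```python
import math

def prediction_entropy(probs):
    """Shannon entropy of the model's output distribution; low entropy
    means a confident prediction."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def entropy_attack(probs, threshold):
    """Guess 'member' when the prediction entropy falls below the threshold."""
    return prediction_entropy(probs) < threshold
```

In practice the threshold is calibrated on data whose membership status the adversary already knows.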
no code implementations • 5 May 2019 • Vikash Sehwag, Arjun Nitin Bhagoji, Liwei Song, Chawin Sitawarin, Daniel Cullina, Mung Chiang, Prateek Mittal
A large body of recent work has investigated the phenomenon of evasion attacks using adversarial examples for deep learning systems, where the addition of norm-bounded perturbations to the test inputs leads to incorrect output classification.
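A standard instance of such a norm-bounded evasion attack (a generic FGSM-style step, not specific to this paper) moves every feature by at most $\epsilon$ in the sign of the gradient. For a linear score $w \cdot x$ the gradient is just $w$, which makes the idea easy to sketch; the function names are hypothetical:

```python
def fgsm_linear(x, w, eps, direction=1.0):
    """One-step L-infinity-bounded perturbation of a linear score w.x:
    each feature moves by eps in the sign of the gradient (w itself),
    scaled by `direction` (+1 raises the score, -1 lowers it to flip
    the predicted class)."""
    def sign(v):
        return 1.0 if v > 0 else -1.0 if v < 0 else 0.0
    return [xi + direction * eps * sign(wi) for xi, wi in zip(x, w)]

def score(x, w):
    """Linear classifier score w.x."""
    return sum(xi * wi for xi, wi in zip(x, w))
```

Each coordinate changes by at most eps, so the perturbation stays inside the L-infinity ball while shifting the score by eps times the L1 norm of w, which is the worst case for this norm.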
1 code implementation • 24 Aug 2017 • Liwei Song, Prateek Mittal
Voice assistants like Siri enable us to control IoT devices conveniently with voice commands; however, they also provide new attack opportunities for adversaries.
Cryptography and Security