no code implementations • 23 May 2023 • Micah Musser, Andrew Lohn, James X. Dempsey, Jonathan Spring, Ram Shankar Siva Kumar, Brenda Leong, Christina Liaghati, Cindy Martinez, Crystal D. Grant, Daniel Rohrer, Heather Frase, Jonathan Elliott, John Bansemer, Mikel Rodriguez, Mitt Regan, Rumman Chowdhury, Stefan Hermanek
In July 2022, the Center for Security and Emerging Technology (CSET) at Georgetown University and the Program on Geopolitics, Technology, and Governance at the Stanford Cyber Policy Center convened a workshop of experts to examine the relationship between vulnerabilities in artificial intelligence systems and more traditional types of software vulnerabilities.
no code implementations • ICML Workshop AML 2021 • Kendra Albert, Maggie Delano, Bogdan Kulynych, Ram Shankar Siva Kumar
In this paper, we review the broader impact statements that adversarial ML researchers wrote as part of their NeurIPS 2020 papers and assess the assumptions that authors have about the goals of their work.
no code implementations • 3 Dec 2020 • Kendra Albert, Maggie Delano, Jonathon Penney, Afsaneh Rigot, Ram Shankar Siva Kumar
This paper critically assesses the adequacy and representativeness of physical domain testing for various adversarial machine learning (ML) attacks against computer vision systems involving human subjects.
no code implementations • 29 Jun 2020 • Ram Shankar Siva Kumar, Jonathon Penney, Bruce Schneier, Kendra Albert
Adversarial machine learning is booming, with ML researchers increasingly targeting commercial ML systems such as those used by Facebook, Tesla, Microsoft, IBM, and Google to demonstrate vulnerabilities.
no code implementations • 4 Feb 2020 • Ram Shankar Siva Kumar, Magnus Nyström, John Lambert, Andrew Marshall, Mario Goertzel, Andi Comissoneru, Matt Swann, Sharon Xia
Based on interviews with 28 organizations, we found that industry practitioners are not equipped with tactical and strategic tools to protect against, detect, and respond to attacks on their machine learning (ML) systems.
no code implementations • 1 Feb 2020 • Kendra Albert, Jonathon Penney, Bruce Schneier, Ram Shankar Siva Kumar
In this paper, we draw on insights from science and technology studies, anthropology, and the human rights literature to inform how defenses against adversarial attacks can be used to suppress dissent and limit attempts to investigate machine learning systems.
2 code implementations • 25 Nov 2019 • Ram Shankar Siva Kumar, David O'Brien, Kendra Albert, Salomé Viljoen, Jeffrey Snover
In the last two years, more than 200 papers have been written on how machine learning (ML) systems can fail because of adversarial attacks on their algorithms and data; this number balloons if we also incorporate papers covering non-adversarial failure modes.
no code implementations • 25 Oct 2018 • Ram Shankar Siva Kumar, David R. O'Brien, Kendra Albert, Salomé Viljoen
When machine learning systems fail because of adversarial manipulation, how should society expect the law to respond?
no code implementations • 17 Nov 2017 • Nathan Wiebe, Ram Shankar Siva Kumar
Finally, we provide a private form of $k$-means clustering that can be used to prevent an all-powerful adversary from learning more than a small fraction of a bit from any user.
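The flavor of private clustering described above can be illustrated with a minimal differentially private variant of Lloyd's algorithm. This is a generic sketch, not the construction from the paper: it assumes coordinates are bounded so per-point sensitivity is constant, splits a privacy budget `epsilon` evenly across iterations, and adds Laplace noise to each cluster's point sum and count before recomputing centroids. All names and parameters here are illustrative.

```python
import numpy as np

def private_kmeans(X, k, epsilon, n_iters=10, rng=None):
    """Illustrative differentially private k-means (NOT the paper's method).

    Assumes each coordinate of X is bounded, so a single point changes a
    cluster's sum by a bounded amount; Laplace noise calibrated to that
    sensitivity is added to the per-cluster sums and counts each iteration.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    # Fancy indexing copies, so we can update centers in place.
    centers = X[rng.choice(n, size=k, replace=False)]
    eps_iter = epsilon / n_iters  # naive even split of the budget

    for _ in range(n_iters):
        # Assign each point to its nearest center.
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = np.argmin(dists, axis=1)
        for j in range(k):
            members = X[labels == j]
            # Noise scale 2/eps_iter is a placeholder for the true
            # sensitivity bound, which depends on the data domain.
            noisy_count = max(len(members) + rng.laplace(scale=2 / eps_iter), 1.0)
            noisy_sum = members.sum(axis=0) + rng.laplace(scale=2 / eps_iter, size=d)
            centers[j] = noisy_sum / noisy_count
    return centers, labels
```

With a generous budget the noisy centroids still track well-separated clusters; as `epsilon` shrinks, the added noise dominates and the centroids reveal correspondingly less about any individual point.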
no code implementations • 20 Sep 2017 • Ram Shankar Siva Kumar, Andrew Wicker, Matt Swann
Operationalizing machine-learning-based security detections is extremely challenging, especially in a continuously evolving cloud environment.