1 code implementation • 17 Sep 2022 • Raphael Olivier, Hadi Abdullah, Bhiksha Raj
To exploit ASR models in real-world, black-box settings, an adversary can leverage the transferability property, i.e., that an adversarial sample crafted for a proxy ASR can also fool a different, remote ASR.
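The transfer attack described above can be sketched in miniature. This is a hedged toy illustration, not the paper's method: two correlated linear scorers stand in for a local proxy ASR and a remote black-box ASR (the real attacks perturb audio waveforms, and all names and values here are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 20

# Attacker's local proxy model, and a similar-but-different remote model.
w_proxy = np.ones(dim)
w_remote = w_proxy + 0.3 * rng.uniform(-1.0, 1.0, dim)

# A benign input that both models score positively.
x = np.ones(dim)
assert w_proxy @ x > 0 and w_remote @ x > 0

# FGSM-style step computed ONLY from the proxy's gradient (which is
# w_proxy itself for a linear scorer), then applied against the remote
# model the attacker never queries for gradients.
eps = 1.5
x_adv = x - eps * np.sign(w_proxy)

# Because the two models are correlated, the perturbation transfers:
# both scores flip sign.
print(w_proxy @ x_adv < 0, w_remote @ x_adv < 0)  # True True
```

The point of the sketch is that the attacker only needs gradient access to the proxy; the perturbation succeeds against the remote model because the two decision boundaries are similar.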
no code implementations • 10 Mar 2022 • Hadi Abdullah, Aditya Karlekar, Saurabh Prasad, Muhammad Sajidur Rahman, Logan Blue, Luke A. Bauer, Vincent Bindschaedler, Patrick Traynor
We begin by comparing 20 recent attack papers, classifying and measuring their suitability to serve as the basis of new "robust to transcription" but "easy for humans to understand" CAPTCHAs.
Automatic Speech Recognition (ASR) +1
no code implementations • ICLR 2022 • Hadi Abdullah, Aditya Karlekar, Vincent Bindschaedler, Patrick Traynor
The targeted transferability of adversarial samples enables attackers to exploit black-box models in the real world.
Automatic Speech Recognition (ASR) +1
no code implementations • 13 Jul 2020 • Hadi Abdullah, Kevin Warren, Vincent Bindschaedler, Nicolas Papernot, Patrick Traynor
Recent research has demonstrated that, like other systems based on neural networks, speech and speaker recognition systems are vulnerable to attacks using manipulated inputs.
Automatic Speech Recognition (ASR) +3
no code implementations • 11 Oct 2019 • Hadi Abdullah, Muhammad Sajidur Rahman, Washington Garcia, Logan Blue, Kevin Warren, Anurag Swarnim Yadav, Tom Shrimpton, Patrick Traynor
Automatic speech recognition and voice identification systems are being deployed in a wide array of applications, from providing control mechanisms to devices lacking traditional interfaces, to the automatic transcription of conversations and authentication of users.
Automatic Speech Recognition (ASR) +1
no code implementations • 18 Mar 2019 • Hadi Abdullah, Washington Garcia, Christian Peeters, Patrick Traynor, Kevin R. B. Butler, Joseph Wilson
In this paper, we break these dependencies and make hidden command attacks more practical through model-agnostic (black-box) attacks, which exploit knowledge of the signal processing algorithms commonly used by VPSes to generate the data fed into machine learning systems.
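A hedged sketch of the core signal-processing observation behind such attacks: magnitude-based feature extraction, common in the front end of ASR pipelines, discards phase, so a phase-scrambled waveform can sound very different to a human while producing identical features. This is an illustrative toy with NumPy FFTs, not the paper's actual perturbation pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
x = rng.standard_normal(n)  # stand-in for one frame of audio

# Magnitude spectrum plays the role of the extracted feature.
X = np.fft.rfft(x)

# Scramble the phases but keep every magnitude exactly the same.
phase = rng.uniform(-np.pi, np.pi, X.shape)
X_adv = np.abs(X) * np.exp(1j * phase)
X_adv[0], X_adv[-1] = X[0], X[-1]  # DC/Nyquist bins must stay real
x_adv = np.fft.irfft(X_adv, n=n)   # the perturbed "audio"

# The waveforms differ, yet the magnitude features match.
print(np.allclose(x, x_adv))                               # False
print(np.allclose(np.abs(np.fft.rfft(x_adv)), np.abs(X)))  # True
```

Because the recognizer only ever sees the magnitude-derived features, any perturbation confined to the discarded components is invisible to the model while degrading human intelligibility.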