no code implementations • 8 Jan 2024 • Jennifer Williams, Karla Pizzi, Paul-Gauthier Noe, Sneha Das
Most recent speech privacy efforts have focused on anonymizing acoustic speaker attributes, but comparatively little research has addressed protecting the information carried in speech content.
Automatic Speech Recognition (ASR) +2
no code implementations • 5 Oct 2023 • Armin Ettenhofer, Jan-Philipp Schulze, Karla Pizzi
Audio adversarial examples are audio files that have been manipulated to fool an automatic speech recognition (ASR) system while still sounding benign to a human listener.
Automatic Speech Recognition (ASR) +1
no code implementations • 21 Jan 2023 • Jennifer Williams, Karla Pizzi, Shuvayanti Das, Paul-Gauthier Noe
Privacy in speech and audio has many facets.
no code implementations • 9 Jan 2023 • Karla Pizzi, Franziska Boenisch, Ugur Sahin, Konstantin Böttinger
To the best of our knowledge, our work is the first to extend MI attacks to audio data, and our results highlight the security risks arising from the extraction of biometric data in this setting.
no code implementations • 20 Jul 2021 • Nicolas M. Müller, Karla Pizzi, Jennifer Williams
The recent emergence of deepfakes has brought manipulated and generated content to the forefront of machine learning research.