Search Results for author: Daniel Park

Found 7 papers, 0 papers with code

Universal Paralinguistic Speech Representations Using Self-Supervised Conformers

no code implementations 9 Oct 2021 Joel Shor, Aren Jansen, Wei Han, Daniel Park, Yu Zhang

Many speech applications require understanding aspects beyond the words being spoken, such as recognizing emotion, detecting whether the speaker is wearing a mask, or distinguishing real from synthetic speech.

Output Randomization: A Novel Defense for both White-box and Black-box Adversarial Models

no code implementations 8 Jul 2021 Daniel Park, Haidar Khan, Azer Khan, Alex Gittens, Bülent Yener

Adversarial examples pose a threat to deep neural network models in a variety of scenarios, ranging from "white box" settings, where the adversary has complete knowledge of the model, to the opposite "black box" setting.
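The defense named in the title, output randomization, can be sketched as follows. This is only an illustrative reconstruction, not the paper's exact scheme (which is not shown in this listing): the function name, the Gaussian noise choice, and the `sigma` parameter are all assumptions.

```python
import numpy as np

def randomized_output(logits, sigma=0.05, rng=None):
    """Perturb model outputs with Gaussian noise before returning them.

    Noising each query's output corrupts the per-query gradient estimates
    that black-box (finite-difference) attacks rely on, while typically
    leaving the argmax prediction unchanged for confident inputs.
    """
    rng = rng if rng is not None else np.random.default_rng()
    return logits + rng.normal(0.0, sigma, size=np.shape(logits))

# With a small sigma relative to the logit gap, the top prediction survives:
logits = np.array([2.0, 0.5, -1.0])
pred = int(np.argmax(randomized_output(logits, rng=np.random.default_rng(0))))
```

The intuition is that an attacker estimating gradients from output differences across nearby queries sees noise-dominated estimates whenever the true difference is small, which is exactly the regime finite-difference attacks operate in.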

SpeechStew: Simply Mix All Available Speech Recognition Data to Train One Large Neural Network

no code implementations 5 Apr 2021 William Chan, Daniel Park, Chris Lee, Yu Zhang, Quoc Le, Mohammad Norouzi

We present SpeechStew, a speech recognition model that is trained on a combination of various publicly available speech recognition datasets: AMI, Broadcast News, Common Voice, LibriSpeech, Switchboard/Fisher, Tedlium, and Wall Street Journal.
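The "simply mix all available data" recipe described above can be sketched in a few lines. The corpus names follow the abstract, but the data-loading shape, function name, and `seed` parameter are illustrative assumptions, not the paper's actual pipeline.

```python
import random

# Hypothetical per-corpus lists of (audio_path, transcript) pairs.
datasets = {
    "AMI": [("ami_001.wav", "okay let's begin")],
    "CommonVoice": [("cv_001.wav", "hello world")],
    "LibriSpeech": [("ls_001.wav", "chapter one")],
}

def mix_all(datasets, seed=0):
    """Pool every utterance from every corpus into one shuffled training
    set, with no per-corpus balancing or domain labels."""
    pooled = [ex for corpus in datasets.values() for ex in corpus]
    random.Random(seed).shuffle(pooled)
    return pooled

mixed = mix_all(datasets)
```

The point of the sketch is the absence of anything clever: every corpus is treated identically and one large model is trained on the pooled result.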

Speech Recognition, Transfer Learning

A survey on practical adversarial examples for malware classifiers

no code implementations 6 Nov 2020 Daniel Park, Bülent Yener

To fully understand the impact of adversarial examples on malware detection, we review practical attacks against malware classifiers that generate executable adversarial malware examples.

Malware Detection

Towards Obfuscated Malware Detection for Low Powered IoT Devices

no code implementations 6 Nov 2020 Daniel Park, Hannah Powers, Benji Prashker, Leland Liu, Bülent Yener

It is imperative to protect IoT devices as they become more prevalent in commercial and personal networks.

Malware Detection

Thwarting finite difference adversarial attacks with output randomization

no code implementations ICLR 2020 Haidar Khan, Daniel Park, Azer Khan, Bülent Yener

Adversarial examples pose a threat to deep neural network models in a variety of scenarios, ranging from "white box" settings, where the adversary has complete knowledge of the model, to the opposite "black box" setting.

Adversarial Attack

Generation & Evaluation of Adversarial Examples for Malware Obfuscation

no code implementations 9 Apr 2019 Daniel Park, Haidar Khan, Bülent Yener

There has been an increased interest in the application of convolutional neural networks for image based malware classification, but the susceptibility of neural networks to adversarial examples allows malicious actors to evade classifiers.

General Classification, Malware Classification
