Search Results for author: Mojan Javaheripi

Found 17 papers, 1 paper with code

Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone

no code implementations 22 Apr 2024 Marah Abdin, Sam Ade Jacobs, Ammar Ahmad Awan, Jyoti Aneja, Ahmed Awadallah, Hany Awadalla, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Harkirat Behl, Alon Benhaim, Misha Bilenko, Johan Bjorck, Sébastien Bubeck, Martin Cai, Caio César Teodoro Mendes, Weizhu Chen, Vishrav Chaudhary, Parul Chopra, Allie Del Giorno, Gustavo de Rosa, Matthew Dixon, Ronen Eldan, Dan Iter, Abhishek Goswami, Suriya Gunasekar, Emman Haider, Junheng Hao, Russell J. Hewett, Jamie Huynh, Mojan Javaheripi, Xin Jin, Piero Kauffmann, Nikos Karampatziakis, Dongwoo Kim, Mahoud Khademi, Lev Kurilenko, James R. Lee, Yin Tat Lee, Yuanzhi Li, Chen Liang, Weishung Liu, Eric Lin, Zeqi Lin, Piyush Madan, Arindam Mitra, Hardik Modi, Anh Nguyen, Brandon Norick, Barun Patra, Daniel Perez-Becker, Thomas Portet, Reid Pryzant, Heyang Qin, Marko Radmilac, Corby Rosset, Sambudha Roy, Olli Saarikivi, Amin Saied, Adil Salim, Michael Santacroce, Shital Shah, Ning Shang, Hiteshi Sharma, Xia Song, Olatunji Ruwase, Xin Wang, Rachel Ward, Guanhua Wang, Philipp Witte, Michael Wyatt, Can Xu, Jiahang Xu, Sonali Yadav, Fan Yang, ZiYi Yang, Donghan Yu, Chengruidong Zhang, Cyril Zhang, Jianwen Zhang, Li Lyna Zhang, Yi Zhang, Yunan Zhang, Xiren Zhou

We introduce phi-3-mini, a 3.8 billion parameter language model trained on 3.3 trillion tokens, whose overall performance, as measured by both academic benchmarks and internal testing, rivals that of models such as Mixtral 8x7B and GPT-3.5 (e.g., phi-3-mini achieves 69% on MMLU and 8.38 on MT-bench), despite being small enough to be deployed on a phone.

NetFlick: Adversarial Flickering Attacks on Deep Learning Based Video Compression

no code implementations 4 Apr 2023 Jung-Woo Chang, Nojan Sheybani, Shehzeen Samarah Hussain, Mojan Javaheripi, Seira Hidano, Farinaz Koushanfar

Experimental results demonstrate that NetFlick can successfully deteriorate the performance of video compression frameworks in both digital and physical settings, and can be further extended to attack downstream video classification networks.

Video Classification, Video Compression

zPROBE: Zero Peek Robustness Checks for Federated Learning

no code implementations ICCV 2023 Zahra Ghodsi, Mojan Javaheripi, Nojan Sheybani, Xinqiao Zhang, Ke Huang, Farinaz Koushanfar

However, keeping the individual updates private allows malicious users to perform Byzantine attacks and degrade the accuracy without being detected.

Federated Learning, Privacy Preserving

RoVISQ: Reduction of Video Service Quality via Adversarial Attacks on Deep Learning-based Video Compression

no code implementations 18 Mar 2022 Jung-Woo Chang, Mojan Javaheripi, Seira Hidano, Farinaz Koushanfar

In this paper, we conduct the first systematic study for adversarial attacks on deep learning-based video compression and downstream classification systems.

Adversarial Attack, Classification +4

HASHTAG: Hash Signatures for Online Detection of Fault-Injection Attacks on Deep Neural Networks

no code implementations 2 Nov 2021 Mojan Javaheripi, Farinaz Koushanfar

We propose HASHTAG, the first framework that enables high-accuracy detection of fault-injection attacks on Deep Neural Networks (DNNs) with provable bounds on detection performance.

Fault Detection
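HASHTAG's per-layer hash idea can be sketched in a few lines (a simplified sketch assuming SHA-256 over raw float32 bytes; the function names and the toy two-layer model are illustrative, and the paper's actual signatures are compact low-collision hashes extracted from the most fault-sensitive layers):

```python
import hashlib
import struct

def layer_signature(weights):
    """Hash one layer's weights (packed as float32 bytes) into a signature."""
    h = hashlib.sha256()
    for w in weights:
        h.update(struct.pack("<f", w))
    return h.hexdigest()

def extract_signatures(model):
    """Offline step: record a golden signature per layer of the trusted model."""
    return {name: layer_signature(w) for name, w in model.items()}

def check_integrity(model, signatures):
    """Online step: recompute signatures and flag layers whose hash changed."""
    return [name for name, w in model.items()
            if layer_signature(w) != signatures[name]]

# Toy model: two layers of weights.
model = {"conv1": [0.5, -1.25, 3.0], "fc": [0.1, 0.2]}
golden = extract_signatures(model)

model["fc"][1] = 0.2000001             # simulate a fault-injected weight
print(check_integrity(model, golden))  # -> ['fc']
```

Any bit-level change to a layer's weights alters its hash, so the online check localizes the fault to the affected layer.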

Trojan Signatures in DNN Weights

no code implementations 7 Sep 2021 Greg Fields, Mohammad Samragh, Mojan Javaheripi, Farinaz Koushanfar, Tara Javidi

Deep neural networks have been shown to be vulnerable to backdoor, or trojan, attacks where an adversary has embedded a trigger in the network at training time such that the model correctly classifies all standard inputs, but generates a targeted, incorrect classification on any input which contains the trigger.

CLEANN: Accelerated Trojan Shield for Embedded Neural Networks

no code implementations 4 Sep 2020 Mojan Javaheripi, Mohammad Samragh, Gregory Fields, Tara Javidi, Farinaz Koushanfar

We propose CLEANN, the first end-to-end framework that enables online mitigation of Trojans for embedded Deep Neural Network (DNN) applications.

Dictionary Learning

Extracurricular Learning: Knowledge Transfer Beyond Empirical Distribution

no code implementations 30 Jun 2020 Hadi Pouransari, Mojan Javaheripi, Vinay Sharma, Oncel Tuzel

We propose extracurricular learning, a novel knowledge distillation method, that bridges this gap by (1) modeling student and teacher output distributions; (2) sampling examples from an approximation to the underlying data distribution; and (3) matching student and teacher output distributions over this extended set including uncertain samples.

Image Classification, Knowledge Distillation +2
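The three steps in the abstract can be illustrated with a toy distillation loss (a minimal sketch: `augment_batch`'s Gaussian jitter is only a stand-in for the paper's sampling from an approximation of the data distribution, and real teachers and students are networks, not logit lists):

```python
import math
import random

def softmax(logits, temperature=1.0):
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def distillation_loss(teacher_logits, student_logits, temperature=4.0):
    """Steps (1) and (3): match the student's softened output distribution
    to the teacher's on a given example."""
    return kl_divergence(softmax(teacher_logits, temperature),
                         softmax(student_logits, temperature))

def augment_batch(batch, n_extra, noise=0.5, seed=0):
    """Step (2): extend the batch with points drawn near the data,
    including uncertain samples between training examples."""
    rng = random.Random(seed)
    extra = [[xi + rng.gauss(0.0, noise) for xi in rng.choice(batch)]
             for _ in range(n_extra)]
    return batch + extra

print(distillation_loss([2.0, 0.5, -1.0], [1.5, 0.8, -0.7]))  # small positive KL
```

The KL term is zero when student and teacher agree exactly and positive otherwise, so minimizing it over the extended batch transfers the teacher's behaviour beyond the empirical distribution.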

GeneCAI: Genetic Evolution for Acquiring Compact AI

no code implementations 8 Apr 2020 Mojan Javaheripi, Mohammad Samragh, Tara Javidi, Farinaz Koushanfar

In the contemporary big data realm, Deep Neural Networks (DNNs) are evolving towards more complex architectures to achieve higher inference accuracy.

Model Compression

FastWave: Accelerating Autoregressive Convolutional Neural Networks on FPGA

no code implementations 9 Feb 2020 Shehzeen Hussain, Mojan Javaheripi, Paarth Neekhara, Ryan Kastner, Farinaz Koushanfar

While WaveNet produces state-of-the-art audio generation results, the naive inference implementation is quite slow; it takes a few minutes to generate just one second of audio on a high-end GPU.

Audio Generation, Audio Synthesis +3
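The cost of naive autoregressive inference, and the activation-caching idea that fast WaveNet implementations build on, can be seen in a toy dilated stack (a sketch with scalar tanh layers standing in for WaveNet's gated convolutions; all names are illustrative and this says nothing about FastWave's actual FPGA design):

```python
import math
from collections import deque

def run_stack(x, weights, dilations):
    """Run the causal dilated stack over a full sequence (zero-padded)."""
    for (w0, w1), d in zip(weights, dilations):
        x = [math.tanh(w0 * x[t] + w1 * (x[t - d] if t >= d else 0.0))
             for t in range(len(x))]
    return x

def naive_generate(weights, dilations, seed, n):
    """O(n^2): every new sample recomputes the stack over the whole history."""
    x = [seed]
    for _ in range(n):
        x.append(run_stack(x, weights, dilations)[-1])
    return x[1:]

def cached_generate(weights, dilations, seed, n):
    """O(n): each layer keeps a FIFO of its last d inputs, so a new sample
    costs only one update per layer instead of a full recomputation."""
    queues = [deque([0.0] * d, maxlen=d) for d in dilations]
    out, x = [], seed
    for _ in range(n):
        h = x
        for (w0, w1), q in zip(weights, queues):
            past = q[0]          # this layer's input from d steps ago
            q.append(h)          # push the current input, evict the oldest
            h = math.tanh(w0 * h + w1 * past)
        out.append(h)
        x = h
    return out
```

Both loops produce identical samples; only the per-sample cost differs, which is why the naive implementation is so slow for second-long audio.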

ASCAI: Adaptive Sampling for acquiring Compact AI

no code implementations 15 Nov 2019 Mojan Javaheripi, Mohammad Samragh, Tara Javidi, Farinaz Koushanfar

This paper introduces ASCAI, a novel adaptive sampling methodology that can learn how to effectively compress Deep Neural Networks (DNNs) for accelerated inference on resource-constrained platforms.

Model Compression

SWNet: Small-World Neural Networks and Rapid Convergence

no code implementations 9 Apr 2019 Mojan Javaheripi, Bita Darvish Rouhani, Farinaz Koushanfar

This transformation leverages our important observation that for a set level of accuracy, convergence is fastest when network topology reaches the boundary of a Small-World Network.

General Classification, Image Classification
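The small-world boundary referenced in the abstract comes from the Watts-Strogatz model; a minimal sketch of its rewiring step on a plain graph (SWNet applies the idea to inter-layer DNN connectivity, which is not shown here, and the names below are illustrative):

```python
import random

def ring_lattice(n, k):
    """Regular start: a ring of n nodes, each linked to its next k neighbours."""
    return {(i, (i + j) % n) for i in range(n) for j in range(1, k + 1)}

def rewire(edges, n, p, seed=0):
    """Watts-Strogatz step: redirect each edge with probability p to a random
    target. Small p keeps local clustering but adds long-range shortcuts,
    producing the small-world regime between a regular lattice (p = 0)
    and a fully random graph (p = 1)."""
    rng = random.Random(seed)
    rewired = set()
    for u, v in sorted(edges):
        if rng.random() < p:
            v = rng.randrange(n)
            while v == u or (u, v) in rewired:
                v = rng.randrange(n)
        rewired.add((u, v))
    return rewired
```

Sweeping p from 0 toward 1 traces out the lattice-to-random interpolation whose small-world boundary the paper identifies as the fastest-converging topology.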

CodeX: Bit-Flexible Encoding for Streaming-based FPGA Acceleration of DNNs

no code implementations 17 Jan 2019 Mohammad Samragh, Mojan Javaheripi, Farinaz Koushanfar

CodeX incorporates nonlinear encoding to the computation flow of neural networks to save memory.
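The memory saving from encoding weights can be illustrated with a small codebook quantizer (a crude stand-in: CodeX's encoding is learned and wired into a streaming FPGA dataflow, and every name below is an assumption). Weights are stored as log2(K)-bit indices into a K-entry codebook instead of 32-bit floats:

```python
import math

def build_codebook(weights, n_levels):
    """Crude nonlinear codebook: sample levels from the sorted weights so
    the levels follow the weight distribution (denser where weights cluster)."""
    s = sorted(weights)
    idx = [round(i * (len(s) - 1) / (n_levels - 1)) for i in range(n_levels)]
    return sorted({s[i] for i in idx})

def encode(weights, codebook):
    """Store each weight as the index of its nearest codebook entry."""
    return [min(range(len(codebook)), key=lambda i: abs(codebook[i] - w))
            for w in weights]

def decode(indices, codebook):
    return [codebook[i] for i in indices]

weights = [-0.9, -0.85, -0.1, 0.0, 0.05, 0.8, 0.9, 0.95]
book = build_codebook(weights, 4)
idx = encode(weights, book)
# 2 bits per weight here, versus 32 bits for raw floats:
bits = len(weights) * math.ceil(math.log2(len(book)))
```

With a 4-entry codebook each weight costs 2 index bits plus a shared 4-float table, a 16x per-weight reduction at the price of bounded quantization error.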

DeepFense: Online Accelerated Defense Against Adversarial Deep Learning

no code implementations 8 Sep 2017 Bita Darvish Rouhani, Mohammad Samragh, Mojan Javaheripi, Tara Javidi, Farinaz Koushanfar

Recent advances in adversarial Deep Learning (DL) have opened up a largely unexplored surface for malicious attacks jeopardizing the integrity of autonomous DL systems.
