Search Results for author: Bita Darvish Rouhani

Found 11 papers, 3 papers with code

Key, Value, Compress: A Systematic Exploration of KV Cache Compression Techniques

no code implementations • 14 Mar 2025 • Neusha Javidnia, Bita Darvish Rouhani, Farinaz Koushanfar

Large language models (LLMs) have demonstrated exceptional capabilities in generating text, images, and video content.

ResMoE: Space-efficient Compression of Mixture of Experts LLMs via Residual Restoration

1 code implementation • 10 Mar 2025 • Mengting Ai, Tianxin Wei, Yifan Chen, Zhichen Zeng, Ritchie Zhao, Girish Varatkar, Bita Darvish Rouhani, Xianfeng Tang, Hanghang Tong, Jingrui He

The Mixture-of-Experts (MoE) Transformer, the backbone architecture of several prominent language models, leverages sparsity by activating only a fraction of the model parameters for each input token.
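The sparse activation described in this snippet is, in standard MoE layers, implemented as top-k expert routing. The NumPy sketch below is a minimal illustration of that general pattern only, not of the ResMoE compression method; the names (moe_forward, gate_w, expert_ws, top_k) are hypothetical.

```python
import numpy as np

def moe_forward(x, gate_w, expert_ws, top_k=2):
    """Sparse MoE routing sketch: each token activates only top_k experts,
    so the remaining expert parameters stay idle for that token."""
    logits = x @ gate_w                                   # (tokens, num_experts) router scores
    top_idx = np.argsort(logits, axis=-1)[:, -top_k:]     # top-k expert indices per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = logits[t, top_idx[t]]
        weights = np.exp(sel - sel.max())                 # softmax over the selected experts only
        weights /= weights.sum()
        for w, e in zip(weights, top_idx[t]):
            out[t] += w * (x[t] @ expert_ws[e])           # only the chosen experts are evaluated
    return out

# Toy usage: 4 tokens, 8 experts, 2 experts active per token.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 16))
gate_w = rng.normal(size=(16, 8))
expert_ws = [rng.normal(size=(16, 16)) for _ in range(8)]
output = moe_forward(tokens, gate_w, expert_ws)
```

Because each token touches only top_k of the expert weight matrices, per-token compute stays roughly constant as the number of experts grows, which is the sparsity the abstract refers to.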

BlackMarks: Black-box Multi-bit Watermarking for Deep Neural Networks

no code implementations • ICLR 2019 • Huili Chen, Bita Darvish Rouhani, Farinaz Koushanfar

To extract the WM, BlackMarks queries the model with the WM key images and decodes the owner’s signature from the corresponding predictions using the designed encoding scheme.
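As a rough illustration of the extraction step described above, the sketch below queries a model with key inputs, maps each prediction to a signature bit, and checks the result against the owner's signature. The names (extract_signature, class_to_bit, verify_ownership) and the even/odd class-to-bit encoding are assumptions made here for illustration; the actual encoding scheme is the one designed in the BlackMarks paper.

```python
import numpy as np

def extract_signature(model_predict, key_inputs, class_to_bit):
    """Query the model with the WM key inputs and decode one signature bit
    per prediction via an encoding that maps classes to bits."""
    preds = model_predict(key_inputs)
    return np.array([class_to_bit[int(c)] for c in preds], dtype=np.uint8)

def verify_ownership(decoded_bits, owner_signature, max_ber=0.05):
    """Claim ownership if the bit error rate (BER) between the decoded bits
    and the owner's signature stays below a chosen threshold."""
    ber = float(np.mean(decoded_bits != owner_signature))
    return ber <= max_ber, ber

# Toy usage with a stand-in model: even classes encode bit 0, odd classes bit 1.
class_to_bit = {c: c % 2 for c in range(10)}
stand_in_predict = lambda inputs: np.array([3, 8, 1, 0, 7])   # pretend predictions
decoded = extract_signature(stand_in_predict, key_inputs=None, class_to_bit=class_to_bit)
verified, ber = verify_ownership(decoded, owner_signature=np.array([1, 0, 1, 0, 1]))
```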

SWNet: Small-World Neural Networks and Rapid Convergence

no code implementations • 9 Apr 2019 • Mojan Javaheripi, Bita Darvish Rouhani, Farinaz Koushanfar

This transformation leverages our key observation that, for a given level of accuracy, convergence is fastest when the network topology reaches the boundary of a Small-World Network.

General Classification • Image Classification
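For readers unfamiliar with the small-world property that this observation relies on, the short networkx sketch below builds a Watts-Strogatz graph and reports its small-world coefficient. It only illustrates the topology concept; it is not the SWNet construction or training procedure.

```python
import networkx as nx

# A Watts-Strogatz graph interpolates between a regular ring lattice (p = 0)
# and a random graph (p = 1); intermediate rewiring probabilities produce
# small-world topologies with high clustering and short path lengths.
G = nx.connected_watts_strogatz_graph(n=64, k=6, p=0.1, seed=0)

# Small-world coefficient sigma = (C / C_rand) / (L / L_rand); sigma > 1
# is commonly taken to indicate small-world structure.
sigma = nx.sigma(G, niter=20, nrand=5, seed=0)
print(f"clustering={nx.average_clustering(G):.3f}  "
      f"avg_path_length={nx.average_shortest_path_length(G):.3f}  sigma={sigma:.2f}")
```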

AgileNet: Lightweight Dictionary-based Few-shot Learning

no code implementations • 21 May 2018 • Mohammad Ghasemzadeh, Fang Lin, Bita Darvish Rouhani, Farinaz Koushanfar, Ke Huang

The success of deep learning models is heavily tied to the use of massive amounts of labeled data and excessively long training times.

Few-Shot Learning

DeepSigns: A Generic Watermarking Framework for IP Protection of Deep Learning Models

2 code implementations • 2 Apr 2018 • Bita Darvish Rouhani, Huili Chen, Farinaz Koushanfar

The resulting models are therefore considered to be the IP of the model builder and need to be protected to preserve the owner's competitive advantage.

Cryptography and Security

Towards Safe Deep Learning: Unsupervised Defense Against Generic Adversarial Attacks

no code implementations • ICLR 2018 • Bita Darvish Rouhani, Mohammad Samragh, Tara Javidi, Farinaz Koushanfar

We introduce a novel automated countermeasure called Parallel Checkpointing Learners (PCL) to thwart potential adversarial attacks and significantly improve the reliability (safety) of a victim DL model.

Deep Learning

DeepFense: Online Accelerated Defense Against Adversarial Deep Learning

no code implementations • 8 Sep 2017 • Bita Darvish Rouhani, Mohammad Samragh, Mojan Javaheripi, Tara Javidi, Farinaz Koushanfar

Recent advances in adversarial Deep Learning (DL) have opened up a largely unexplored surface for malicious attacks jeopardizing the integrity of autonomous DL systems.

Deep Learning

DeepSecure: Scalable Provably-Secure Deep Learning

no code implementations • 24 May 2017 • Bita Darvish Rouhani, M. Sadegh Riazi, Farinaz Koushanfar

This paper proposes DeepSecure, a novel framework that enables scalable execution of the state-of-the-art Deep Learning (DL) models in a privacy-preserving setting.

Cryptography and Security
