no code implementations • 26 Jan 2023 • Yuji Chai, Devashree Tripathy, Chuteng Zhou, Dibakar Gope, Igor Fedorov, Ramon Matas, David Brooks, Gu-Yeon Wei, Paul Whatmough
The ability to accurately predict deep neural network (DNN) inference performance metrics, such as latency, power, and memory footprint, for an arbitrary DNN on a target hardware platform is essential to the design of DNN-based models.
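A rough sanity check for such predictions is a roofline-style analytical bound. The sketch below is illustrative only (the peak-throughput constants and the example layer are assumptions, not the paper's learned predictor): it estimates per-layer latency as the slower of compute time and memory-traffic time.

```python
# Roofline-style latency estimate for a single DNN layer (illustrative only:
# the peak-throughput numbers are made-up hardware constants).
def layer_latency_s(flops, bytes_moved, peak_flops=1e9, peak_bytes_per_s=4e9):
    """Latency is bounded below by the slower of compute and memory traffic."""
    compute_time = flops / peak_flops              # seconds if compute-bound
    memory_time = bytes_moved / peak_bytes_per_s   # seconds if bandwidth-bound
    return max(compute_time, memory_time)

# Example: 3x3 conv, 32 -> 64 channels, 56x56 output, fp32 weights/activations.
flops = 2 * 3 * 3 * 32 * 64 * 56 * 56
bytes_moved = (3 * 3 * 32 * 64 + 56 * 56 * (32 + 64)) * 4
print(f"estimated latency: {layer_latency_s(flops, bytes_moved):.4f} s")
```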
1 code implementation • 17 Aug 2022 • Kartikeya Bhardwaj, James Ward, Caleb Tung, Dibakar Gope, Lingchuan Meng, Igor Fedorov, Alex Chalfin, Paul Whatmough, Danny Loh
To address this question, we propose a new paradigm called Restructurable Activation Networks (RANs) that manipulate the amount of non-linearity in models to improve their hardware-awareness and efficiency.
1 code implementation • 28 Feb 2022 • Nikita Kuzmin, Igor Fedorov, Alexey Sholokhov
We propose a new probabilistic speaker embedding extractor using the information encoded in the embedding magnitude and leverage it in the speaker verification pipeline.
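One simple reading of "information encoded in the embedding magnitude" is to use the embedding direction as the speaker estimate and the norm as a confidence weight. The toy scorer below is only a sketch of that intuition (the shrinkage function is invented for illustration), not the paper's probabilistic extractor.

```python
import numpy as np

def magnitude_weighted_score(e1, e2):
    """Toy verification score: cosine similarity between two embeddings,
    shrunk toward 0 when either embedding has small magnitude (read here
    as low confidence). The shrinkage function is invented for illustration."""
    n1, n2 = np.linalg.norm(e1), np.linalg.norm(e2)
    cosine = float(e1 @ e2) / (n1 * n2)
    confidence = (n1 * n2) / (1.0 + n1 * n2)   # in (0, 1), grows with norms
    return confidence * cosine

score = magnitude_weighted_score(np.array([2.0, 0.1]), np.array([1.9, 0.2]))
```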
no code implementations • 15 Jan 2022 • Igor Fedorov, Ramon Matas, Hokchhay Tann, Chuteng Zhou, Matthew Mattina, Paul Whatmough
Deploying TinyML models on low-cost IoT hardware is very challenging, due to limited device memory capacity.
no code implementations • 29 Sep 2021 • Anil Kag, Igor Fedorov, Aditya Gangrade, Paul Whatmough, Venkatesh Saligrama
The first network is a low-capacity network that can be deployed on an edge device, whereas the second is a high-capacity network deployed in the cloud.
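A common way to realize such an edge/cloud pair is confidence-based deferral: run the cheap model first and fall back to the large one only on uncertain inputs. The sketch below assumes softmax-output callables and a hand-picked threshold; the paper's actual routing rule may differ.

```python
import numpy as np

def cascade_predict(x, edge_model, cloud_model, threshold=0.9):
    """Run the cheap on-device model first; pay for a cloud round-trip only
    when the edge model's top softmax probability is below the threshold."""
    probs = edge_model(x)                  # assumed to return a softmax vector
    if np.max(probs) >= threshold:
        return int(np.argmax(probs)), "edge"
    return int(np.argmax(cloud_model(x))), "cloud"
```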
1 code implementation • 21 Oct 2020 • Colby Banbury, Chuteng Zhou, Igor Fedorov, Ramon Matas Navarro, Urmish Thakker, Dibakar Gope, Vijay Janapa Reddi, Matthew Mattina, Paul N. Whatmough
To address this challenge, neural architecture search (NAS) promises to help design accurate ML models that meet the tight MCU memory, latency, and energy constraints.
Ranked #1 on Keyword Spotting on Google Speech Commands V2 12
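In its simplest form, hardware-constrained NAS can be read as search with hard feasibility checks. The sketch below uses random search with illustrative flash/SRAM budgets and candidate fields, which are assumptions for exposition rather than the paper's method.

```python
# Illustrative MCU budgets: weights live in flash, activations in SRAM.
MAX_FLASH_BYTES = 1_000_000
MAX_SRAM_BYTES = 256_000

def feasible(cand):
    """Hard constraint check applied before any (expensive) training."""
    return (cand["weight_bytes"] <= MAX_FLASH_BYTES
            and cand["peak_activation_bytes"] <= MAX_SRAM_BYTES)

def constrained_random_search(sample_candidate, evaluate, trials=100):
    """Keep the most accurate architecture among the feasible samples."""
    best, best_acc = None, -1.0
    for _ in range(trials):
        cand = sample_candidate()
        if not feasible(cand):
            continue
        acc = evaluate(cand)
        if acc > best_acc:
            best, best_acc = cand, acc
    return best, best_acc
```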
1 code implementation • 22 May 2020 • Sandeep Singh Sandha, Mohit Aggarwal, Igor Fedorov, Mani Srivastava
Tuning hyperparameters for machine learning algorithms is a tedious task, one that is typically done manually.
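As a baseline for automating that manual process, even plain random search works: sample configurations, score each, and keep the best. The sketch below is generic (the search space and scoring callable are placeholders), not the parallel tuner proposed in the paper.

```python
import random

def random_search(train_and_score, space, trials=50, seed=0):
    """Replace manual tweaking with sampled configurations: try `trials`
    random settings and keep the best-scoring one."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(trials):
        cfg = {name: rng.choice(values) for name, values in space.items()}
        score = train_and_score(cfg)       # e.g. validation accuracy
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

space = {"lr": [1e-4, 1e-3, 1e-2], "batch_size": [16, 32, 64]}
```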
1 code implementation • 20 May 2020 • Igor Fedorov, Marko Stamenovic, Carl Jensen, Li-Chia Yang, Ari Mandell, Yiming Gan, Matthew Mattina, Paul N. Whatmough
Modern speech enhancement algorithms achieve remarkable noise suppression by means of large recurrent neural networks (RNNs).
no code implementations • 4 Oct 2019 • Urmish Thakker, Igor Fedorov, Jesse Beu, Dibakar Gope, Chu Zhou, Ganesh Dasika, Matthew Mattina
This paper introduces a method to compress RNNs for resource-constrained environments using the Kronecker product (KP).
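The storage saving is easy to see concretely: a dense matrix is replaced by two small factors, and matrix-vector products can use the factors directly. The sketch below (sizes chosen for illustration) checks the Kronecker identity that avoids ever materializing the full matrix.

```python
import numpy as np

# Replace a dense 256x256 weight matrix with a Kronecker product A (x) B:
# storage drops from 256*256 = 65536 values to 16*16 + 16*16 = 512.
A = np.random.randn(16, 16)
B = np.random.randn(16, 16)
W = np.kron(A, B)          # full matrix, materialized here only to check

x = np.random.randn(256)
y_full = W @ x

# Matrix-vector product straight from the factors, never forming W:
# with row-major reshape X = x.reshape(16, 16), (A (x) B) x = vec(A X B^T).
y_fact = (A @ x.reshape(16, 16) @ B.T).reshape(-1)
print(np.allclose(y_full, y_fact))   # True
```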
no code implementations • 7 Jun 2019 • Urmish Thakker, Jesse Beu, Dibakar Gope, Chu Zhou, Igor Fedorov, Ganesh Dasika, Matthew Mattina
Recurrent Neural Networks (RNNs) can be difficult to deploy on resource-constrained devices due to their size. As a result, there is a need for compression techniques that can significantly compress RNNs without negatively impacting task accuracy.
no code implementations • NeurIPS 2019 • Igor Fedorov, Ryan P. Adams, Matthew Mattina, Paul N. Whatmough
The vast majority of processors in the world are actually microcontroller units (MCUs), which find widespread use performing simple control tasks in applications ranging from automobiles to medical devices and office equipment.
no code implementations • 10 Apr 2018 • Igor Fedorov, Bhaskar D. Rao
This paper addresses the problem of learning dictionaries for multimodal datasets, i.e., datasets collected from multiple data sources.
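A generic way to pose the multimodal problem couples per-modality dictionaries through a shared sparse code; the objective below is a hedged sketch of that setting, not necessarily the paper's exact formulation:

$$\min_{D_1, D_2, X}\; \|Y_1 - D_1 X\|_F^2 + \|Y_2 - D_2 X\|_F^2 + \lambda \|X\|_1,$$

where $Y_1, Y_2$ are the data matrices from the two sources, $D_1, D_2$ their dictionaries, and the shared code $X$ ties corresponding samples together across modalities.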
no code implementations • 5 Feb 2018 • Igor Fedorov, Bhaskar D. Rao
This paper addresses the topic of sparsifying deep neural networks (DNNs).
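For contrast with the paper's Bayesian approach, the most common sparsification baseline is simple magnitude pruning; the sketch below implements that generic baseline (the sparsity level is arbitrary), not the method proposed here.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out (approximately) the `sparsity` fraction of weights with the
    smallest magnitudes. A generic pruning baseline, not the Bayesian
    sparsification proposed in the paper."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

W = np.random.randn(64, 64)
print((magnitude_prune(W) == 0).mean())   # ~0.9 of entries zeroed
```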
no code implementations • 30 Mar 2017 • Igor Fedorov, Ritwik Giri, Bhaskar D. Rao, Truong Q. Nguyen
We propose a novel method called the Relevance Subject Machine (RSM) to solve the person re-identification (re-id) problem.
no code implementations • 6 May 2016 • Igor Fedorov, Ritwik Giri, Bhaskar D. Rao, Truong Q. Nguyen
In this paper, we present a novel Bayesian approach for simultaneous recovery of block-sparse signals in the presence of outliers.
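The setting can be sketched as a linear observation model with two structured error terms; the notation below is a generic statement of the setup rather than the paper's exact model:

$$y = \Phi x + e + n,$$

where $x$ is block-sparse (its nonzeros cluster in a few blocks), $e$ is a sparse outlier vector absorbing gross corruptions, and $n$ is dense noise; a Bayesian treatment places sparsity-promoting priors on both $x$ and $e$.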
no code implementations • 7 Apr 2016 • Igor Fedorov, Alican Nalci, Ritwik Giri, Bhaskar D. Rao, Truong Q. Nguyen, Harinath Garudadri
We show that the proposed framework encompasses a large class of sparse non-negative least squares (S-NNLS) algorithms and provide a computationally efficient inference procedure based on multiplicative update rules.
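A classic instance of such multiplicative rules is the Lee-Seung/Hoyer-style update for an l1-penalized non-negative least squares objective. The sketch below assumes entrywise nonnegative A and y and is a generic illustration, not necessarily the update derived in the paper.

```python
import numpy as np

def snnls_multiplicative(A, y, lam=0.1, iters=500, eps=1e-12):
    """Multiplicative updates for min_x ||y - A x||^2 + lam * sum(x), x >= 0.
    Assumes A and y are entrywise nonnegative, so every update factor is
    nonnegative and x stays feasible. Generic Hoyer-style rule, not
    necessarily the update derived in the paper."""
    x = np.ones(A.shape[1])
    Aty, AtA = A.T @ y, A.T @ A
    for _ in range(iters):
        x = x * Aty / (AtA @ x + lam + eps)
    return x

rng = np.random.default_rng(0)
A = rng.random((30, 50))
x_true = np.zeros(50)
x_true[[3, 17, 42]] = [1.0, 2.0, 0.5]
x_hat = snnls_multiplicative(A, A @ x_true)
```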
no code implementations • 22 Jan 2016 • Alican Nalci, Igor Fedorov, Maher Al-Shoukairi, Thomas T. Liu, Bhaskar D. Rao
We refer to the proposed method as rectified sparse Bayesian learning (R-SBL).