no code implementations • 24 Feb 2022 • Amir Ardakani, Arash Ardakani, Brett Meyer, James J. Clark, Warren J. Gross
Quantization of deep neural networks is a promising approach that reduces the inference cost, making it feasible to run deep networks on resource-restricted devices.
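As a rough illustration of the idea, the sketch below uniformly quantizes a floating-point weight tensor to 8-bit integers and reconstructs it; the function name and the symmetric uniform scheme are generic assumptions for illustration, not the specific method proposed in the paper.

```python
import numpy as np

def quantize_uniform(w, num_bits=8):
    """Symmetric uniform quantization of a weight tensor (illustrative only)."""
    qmax = 2 ** (num_bits - 1) - 1              # e.g. 127 for 8 bits
    scale = np.max(np.abs(w)) / qmax            # map the largest magnitude to qmax
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale                             # dequantize with q * scale

w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_uniform(w, num_bits=8)
w_hat = q.astype(np.float32) * scale            # low-precision approximation of w
print(np.max(np.abs(w - w_hat)))                # small quantization error
```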
no code implementations • NeurIPS 2020 • Arash Ardakani, Amir Ardakani, Warren Gross
Therefore, our FSM-based model can learn extremely long-term dependencies, as it requires only 1/l of the memory storage that LSTMs need during training, where l is the number of time steps.
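To make the 1/l factor concrete, here is a back-of-the-envelope sketch with hypothetical sizes: an LSTM trained with backpropagation through time caches a hidden state for every one of the l time steps, whereas a model whose recurrent core is a finite state machine only needs to keep the current state.

```python
# Hypothetical sizes, only to illustrate the 1/l storage ratio claimed above.
time_steps = 1_000            # l
hidden_units = 256
bytes_per_value = 4           # float32

lstm_activation_memory = time_steps * hidden_units * bytes_per_value  # all l states
fsm_activation_memory = 1 * hidden_units * bytes_per_value            # current state only

print(lstm_activation_memory // fsm_activation_memory)  # -> 1000, i.e. a factor of l
```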
no code implementations • NeurIPS 2019 • Arash Ardakani, Zhengyun Ji, Amir Ardakani, Warren Gross
XNOR networks seek to reduce the model size and computational cost of neural networks for deployment on specialized hardware that must perform real-time processing with limited hardware resources.
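As a rough sketch of the arithmetic behind XNOR (binary) networks, the snippet below replaces a floating-point dot product with sign binarization plus an agreement count, which is what XNOR-popcount kernels compute in hardware; the function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def binarize(x):
    """Map a real-valued vector to {-1, +1} by its sign (0 treated as +1)."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def xnor_dot(a_bin, b_bin):
    """Dot product of two {-1, +1} vectors via an XNOR-style agreement count."""
    matches = np.count_nonzero(a_bin == b_bin)   # popcount of the XNOR of sign bits
    n = a_bin.size
    return 2 * matches - n                       # agreements minus disagreements

a, b = np.random.randn(64), np.random.randn(64)
# Both expressions give the same result on binarized inputs.
print(xnor_dot(binarize(a), binarize(b)),
      binarize(a).astype(int) @ binarize(b).astype(int))
```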