no code implementations • NeurIPS 2023 • Yiwei Lu, YaoLiang Yu, Xinlin Li, Vahid Partovi Nia
In neural network binarization, BinaryConnect (BC) and its variants are considered the standard.
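For context, a minimal PyTorch-style sketch of the BC idea (latent real-valued weights, sign binarization in the forward pass, straight-through gradients); the class names and clipping rule here are illustrative, not taken from any paper's released code:

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a straight-through estimator (STE)."""

    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)  # latent real weights -> {-1, +1} (0 maps to 0)

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # STE: pass gradients through, zeroed where |w| > 1 (BC-style clipping)
        return grad_out * (w.abs() <= 1).float()

class BinaryLinear(torch.nn.Linear):
    """Linear layer that binarizes its weights on the fly."""
    def forward(self, x):
        w_bin = BinarizeSTE.apply(self.weight)
        return torch.nn.functional.linear(x, w_bin, self.bias)
```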
1 code implementation • 29 Jun 2023 • Phuoc-Hoan Charles Le, Xinlin Li
With the growing popularity and size of vision transformers (ViTs), there has been increasing interest in making them more efficient and less computationally costly to deploy on edge devices with limited computing resources.
Ranked #1 on Image Classification on ImageNet-1k
no code implementations • 24 Mar 2023 • Vahid Partovi Nia, Guojun Zhang, Ivan Kobyzev, Michael R. Metel, Xinlin Li, Ke Sun, Sobhan Hemati, Masoud Asgharian, Linglong Kong, Wulong Liu, Boxing Chen
Deep models have dominated the artificial intelligence (AI) industry since the ImageNet challenge in 2012.
no code implementations • 22 Dec 2022 • Xinlin Li, Mariana Parazeres, Adam Oberman, Alireza Ghaffari, Masoud Asgharian, Vahid Partovi Nia
With the advent of deep learning applications on edge devices, researchers are actively trying to optimize deployment on low-power, memory-constrained devices.
2 code implementations • ICCV 2023 • Xinlin Li, Bang Liu, Rui Heng Yang, Vanessa Courville, Chao Xing, Vahid Partovi Nia
We further propose a sign-scale decomposition design to enhance training efficiency and a low-variance random initialization strategy to improve the model's transfer learning performance.
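As a rough illustration of what a sign-scale decomposition might look like (the function name, exponent range, and rounding rule are assumptions for this sketch, not the paper's design), each real weight is split into a sign in {-1, +1} and a power-of-two scale, so a multiply reduces to a bit shift:

```python
import torch

def sign_scale_decompose(w, p_min=-7, p_max=0):
    """Illustrative split of a real weight tensor into a sign in {-1, +1}
    and a power-of-two scale 2^p (hypothetical helper, for exposition)."""
    sign = torch.where(w >= 0, torch.ones_like(w), -torch.ones_like(w))
    # Round log2|w| to the nearest integer exponent, clamped to the range.
    p = torch.round(torch.log2(w.abs().clamp_min(1e-12))).clamp(p_min, p_max)
    return sign, p

w = torch.randn(4)
sign, p = sign_scale_decompose(w)
w_hat = sign * torch.pow(2.0, p)  # reconstruction; exact only for powers of two
```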
no code implementations • 15 Jul 2022 • Anderson R. Avila, Khalil Bibi, Rui Heng Yang, Xinlin Li, Chao Xing, Xiao Chen
Deep neural networks (DNNs) have achieved impressive success in multiple domains.
no code implementations • 28 Jun 2022 • Matteo Cacciola, Antonio Frangioni, Xinlin Li, Andrea Lodi
In machine learning, artificial neural networks (ANNs) are a powerful tool, broadly used across many applications.
1 code implementation • NeurIPS 2021 • Xinlin Li, Bang Liu, YaoLiang Yu, Wulong Liu, Chunjing Xu, Vahid Partovi Nia
Shift neural networks reduce computational complexity by removing expensive multiplication operations and quantizing continuous weights into low-bit discrete values, making them fast and energy-efficient compared to conventional neural networks.
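The efficiency claim rests on the fact that, for integer values, a power-of-two multiply reduces to a bit shift; a toy check with illustrative values:

```python
# Multiplying by 2**p is a left shift by p bits, which is the
# hardware saving that shift networks exploit.
x, p = 13, 3
assert x * (2 ** p) == (x << p) == 104
```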
no code implementations • NeurIPS 2021 • Mariana Oliveira Prazeres, Xinlin Li, Vahid Partovi Nia, Adam M Oberman
To deploy deep neural networks on edge devices, compressed (resource-efficient) networks need to be developed.
no code implementations • 9 Jun 2020 • Alejandro Murua, Ramchalam Ramakrishnan, Xinlin Li, Rui Heng Yang, Vahid Partovi Nia
Recurrent neural networks (RNNs) such as long short-term memory (LSTM) networks are essential in a multitude of daily-life tasks such as speech, language, video, and multimodal learning.
no code implementations • 8 Jun 2020 • Vahid Partovi Nia, Xinlin Li, Masoud Asgharian, Shoubo Hu, Zhitang Chen, Yanhui Geng
Our simulation results show that the proposed adjustment significantly improves the performance of the causal direction test statistic for heterogeneous data.
no code implementations • 21 Apr 2020 • Mahdi Zolnouri, Xinlin Li, Vahid Partovi Nia
Training large-scale deep neural networks is a time-consuming operation, often requiring many GPUs for acceleration.
no code implementations • 30 Sep 2019 • Xinlin Li, Vahid Partovi Nia
Binary neural networks improve the computational efficiency of deep models by a large margin.
no code implementations • 25 Sep 2019 • Xinlin Li, Vahid Partovi Nia
Edge intelligence, especially binary neural networks (BNNs), has recently attracted considerable attention from the artificial intelligence community.