Search Results for author: Xinlin Li

Found 15 papers, 3 papers with code

BinaryViT: Pushing Binary Vision Transformers Towards Convolutional Models

1 code implementation • 29 Jun 2023 • Phuoc-Hoan Charles Le, Xinlin Li

With the increasing popularity and size of vision transformers (ViTs), there has been growing interest in making them more efficient and less computationally costly for deployment on edge devices with limited computing resources.

Binarization

Mathematical Challenges in Deep Learning

no code implementations • 24 Mar 2023 • Vahid Partovi Nia, Guojun Zhang, Ivan Kobyzev, Michael R. Metel, Xinlin Li, Ke Sun, Sobhan Hemati, Masoud Asgharian, Linglong Kong, Wulong Liu, Boxing Chen

Deep models have dominated the artificial intelligence (AI) industry since the ImageNet challenge in 2012.

EuclidNets: An Alternative Operation for Efficient Inference of Deep Learning Models

no code implementations • 22 Dec 2022 • Xinlin Li, Mariana Parazeres, Adam Oberman, Alireza Ghaffari, Masoud Asgharian, Vahid Partovi Nia

With the advent of deep learning applications on edge devices, researchers are actively trying to optimize their deployment on low-power devices with restricted memory.

Quantization

DenseShift: Towards Accurate and Efficient Low-Bit Power-of-Two Quantization

1 code implementation • ICCV 2023 • Xinlin Li, Bang Liu, Rui Heng Yang, Vanessa Courville, Chao Xing, Vahid Partovi Nia

We further propose a sign-scale decomposition design to enhance training efficiency and a low-variance random initialization strategy to improve the model's transfer learning performance.

Quantization Transfer Learning
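The DenseShift snippet above concerns low-bit power-of-two quantization, where each weight is mapped to a signed power of two so that multiplications become bit shifts. A minimal sketch of that idea follows; the exponent clamp and bit width are illustrative assumptions, and the paper's sign-scale decomposition and initialization strategy are not reproduced here.

```python
import math

def power_of_two_quantize(w: float, bits: int = 4) -> float:
    """Map a weight to a signed power of two (illustrative sketch).

    Assumes weights are at most 1 in magnitude, so the exponent is
    clamped to the range [-(2**(bits-1)), 0] -- an assumption for
    this sketch, not a detail taken from the paper.
    """
    if w == 0.0:
        return 0.0
    sign = 1.0 if w > 0 else -1.0
    # Round the log-magnitude to the nearest integer exponent e,
    # so the quantized magnitude is exactly 2**e.
    e = round(math.log2(abs(w)))
    e = max(min(e, 0), -(2 ** (bits - 1)))
    return sign * (2.0 ** e)
```

For example, `power_of_two_quantize(0.3)` returns `0.25` (the nearest power of two), and the sign is preserved for negative weights.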

Deep Neural Networks pruning via the Structured Perspective Regularization

no code implementations • 28 Jun 2022 • Matteo Cacciola, Antonio Frangioni, Xinlin Li, Andrea Lodi

In Machine Learning, Artificial Neural Networks (ANNs) are a very powerful tool, broadly used in many applications.

$S^3$: Sign-Sparse-Shift Reparametrization for Effective Training of Low-bit Shift Networks

1 code implementation • NeurIPS 2021 • Xinlin Li, Bang Liu, YaoLiang Yu, Wulong Liu, Chunjing Xu, Vahid Partovi Nia

Shift neural networks reduce computation complexity by removing expensive multiplication operations and quantizing continuous weights into low-bit discrete values, making them fast and energy-efficient compared to conventional neural networks.
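The core trick behind shift networks is that multiplying by a weight of the form sign * 2^p needs only a bit shift, no multiplier. A minimal sketch of that operation on integer activations follows; the function name is hypothetical, and the paper's actual contribution, the S^3 reparametrization for *training* such discrete weights, is not shown.

```python
def shift_multiply(x: int, sign: int, p: int) -> int:
    """Multiply integer activation x by a weight sign * 2**p using
    only a bit shift (the multiplication-free core of shift networks)."""
    # A positive exponent shifts left (x * 2**p); a negative exponent
    # shifts right (x // 2**(-p)), an integer approximation of x * 2**p.
    shifted = x << p if p >= 0 else x >> -p
    return shifted if sign >= 0 else -shifted
```

For example, applying a weight of 0.25 = +2^-2 to the activation 16 via `shift_multiply(16, 1, -2)` gives 4, the same result as the multiplication 16 * 0.25.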


Tensor train decompositions on recurrent networks

no code implementations • 9 Jun 2020 • Alejandro Murua, Ramchalam Ramakrishnan, Xinlin Li, Rui Heng Yang, Vahid Partovi Nia

Recurrent neural networks (RNNs) such as long short-term memory (LSTM) networks are essential in a multitude of daily life tasks such as speech, language, video, and multimodal learning.
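A tensor train (TT) decomposition compresses a large weight matrix into a chain of small cores, which is how RNN weight matrices are compressed in this line of work. The sketch below shows the two-core case; the shapes and rank are illustrative assumptions, not values from the paper.

```python
import numpy as np

# A weight matrix W of shape (m1*m2, n1*n2) is stored as two small
# TT cores instead of one dense matrix (illustrative shapes and rank).
m1, m2, n1, n2, rank = 4, 4, 4, 4, 2
rng = np.random.default_rng(0)
g1 = rng.standard_normal((m1, n1, rank))   # first TT core
g2 = rng.standard_normal((rank, m2, n2))   # second TT core

# Reconstruct the full matrix by contracting the cores over the TT rank:
# W[(i1,i2),(j1,j2)] = sum_r g1[i1, j1, r] * g2[r, i2, j2]
w = np.einsum('iar,rjb->ijab', g1, g2)     # shape (m1, m2, n1, n2)
w = w.reshape(m1 * m2, n1 * n2)

# The compression: parameter count drops from the dense matrix's size
# to the combined size of the two cores.
full_params = m1 * m2 * n1 * n2            # 256 for the dense matrix
tt_params = g1.size + g2.size              # 64 for the two cores
```

With more index factors the same pattern extends to longer chains of cores, and the savings grow multiplicatively.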

A Causal Direction Test for Heterogeneous Populations

no code implementations • 8 Jun 2020 • Vahid Partovi Nia, Xinlin Li, Masoud Asgharian, Shoubo Hu, Zhitang Chen, Yanhui Geng

Our simulation results show that the proposed adjustment significantly improves the performance of the causal direction test statistic for heterogeneous data.

Clustering Decision Making

Importance of Data Loading Pipeline in Training Deep Neural Networks

no code implementations • 21 Apr 2020 • Mahdi Zolnouri, Xinlin Li, Vahid Partovi Nia

Training large-scale deep neural networks is a long, time-consuming operation that often requires many GPUs to accelerate.

Data Augmentation

Random Bias Initialization Improves Quantized Training

no code implementations • 30 Sep 2019 • Xinlin Li, Vahid Partovi Nia

Binary neural networks improve the computational efficiency of deep models by a large margin.

Random Bias Initialization Improving Binary Neural Network Training

no code implementations • 25 Sep 2019 • Xinlin Li, Vahid Partovi Nia

Edge intelligence, especially binary neural networks (BNNs), has recently attracted considerable attention from the artificial intelligence community.
