Search Results for author: Mohammadreza Tayaranian

Found 5 papers, 0 papers with code

Faster Inference of Integer SWIN Transformer by Removing the GELU Activation

no code implementations · 2 Feb 2024 · Mohammadreza Tayaranian, Seyyed Hasan Mozafari, James J. Clark, Brett Meyer, Warren Gross

In this work, we improve on the inference latency of the state-of-the-art methods by removing the floating-point operations associated with the GELU activation in the Swin Transformer (see the sketch below).

Tasks: Image Classification · Knowledge Distillation · +1
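The abstract lends itself to a short illustration. Below is a minimal PyTorch sketch, assuming the idea amounts to swapping each GELU module for an integer-friendly ReLU and then recovering accuracy with knowledge distillation (consistent with the task tags above); the function name and the torchvision usage are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn

def replace_gelu_with_relu(model: nn.Module) -> int:
    """Recursively swap every nn.GELU for nn.ReLU.

    ReLU is integer-friendly: on a quantized tensor it reduces to a
    clamp at the zero point, avoiding the dequantize/requantize round
    trip that the transcendental GELU would force.
    """
    swapped = 0
    for name, child in model.named_children():
        if isinstance(child, nn.GELU):
            setattr(model, name, nn.ReLU(inplace=True))
            swapped += 1
        else:
            swapped += replace_gelu_with_relu(child)
    return swapped

# Hypothetical usage: patch a Swin backbone, then fine-tune with
# knowledge distillation from the original GELU model to recover accuracy.
# from torchvision.models import swin_t
# model = swin_t(weights="DEFAULT")
# print(replace_gelu_with_relu(model), "GELU modules replaced")
```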

Towards Fine-tuning Pre-trained Language Models with Integer Forward and Backward Propagation

no code implementations · 20 Sep 2022 · Mohammadreza Tayaranian, Alireza Ghaffari, Marzieh S. Tahaei, Mehdi Rezagholizadeh, Masoud Asgharian, Vahid Partovi Nia

Previously, researchers focused on lower-bit-width integer data types for the forward propagation of language models, to save memory and computation.
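A worked sketch of what integer forward *and* backward propagation can look like, assuming symmetric per-tensor int8 quantization with int32 accumulation for both the activation and gradient matmuls; `quantize_int8` and `IntLinear` are hypothetical names, not the paper's implementation, and the integer matmul path below is the CPU one.

```python
import torch

def quantize_int8(x: torch.Tensor):
    """Symmetric per-tensor int8 quantization: int8 values plus a scale."""
    scale = x.abs().max().clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(x / scale), -127, 127).to(torch.int8)
    return q, scale

class IntLinear(torch.autograd.Function):
    """Linear layer whose forward *and* backward matmuls run on
    int8-quantized operands, accumulated in int32 and rescaled to float."""

    @staticmethod
    def forward(ctx, x, w):  # x: (N, in), w: (out, in)
        qx, sx = quantize_int8(x)
        qw, sw = quantize_int8(w)
        ctx.save_for_backward(qx, qw)
        ctx.scales = (sx, sw)
        y = (qx.to(torch.int32) @ qw.to(torch.int32).t()).float() * (sx * sw)
        return y

    @staticmethod
    def backward(ctx, grad_out):
        qx, qw = ctx.saved_tensors
        sx, sw = ctx.scales
        qg, sg = quantize_int8(grad_out)  # gradients are quantized too
        grad_x = (qg.to(torch.int32) @ qw.to(torch.int32)).float() * (sg * sw)
        grad_w = (qg.to(torch.int32).t() @ qx.to(torch.int32)).float() * (sg * sx)
        return grad_x, grad_w

# Usage: y = IntLinear.apply(x, w)
```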

Efficient Fine-Tuning of Compressed Language Models with Learners

no code implementations · 3 Aug 2022 · Danilo Vucetic, Mohammadreza Tayaranian, Maryam Ziaeefard, James J. Clark, Brett H. Meyer, Warren J. Gross

We introduce Learner modules and priming, novel fine-tuning methods that exploit the overparameterization of pre-trained language models to improve convergence speed and resource utilization (see the sketch below).

Tasks: CoLA · Navigate
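A minimal sketch of the general idea, assuming a Learner module behaves like a small trainable side-path attached to a frozen pre-trained layer, and reading "priming" as a warm-up phase that trains only the Learners first; both readings are assumptions, and the class below is not the paper's exact construction.

```python
import torch
import torch.nn as nn

class Learner(nn.Module):
    """Hypothetical Learner: a small trainable bottleneck running in
    parallel with a frozen pre-trained linear layer."""

    def __init__(self, frozen: nn.Linear, bottleneck: int = 32):
        super().__init__()
        self.frozen = frozen
        for p in self.frozen.parameters():
            p.requires_grad = False  # the pre-trained weights stay fixed
        self.down = nn.Linear(frozen.in_features, bottleneck, bias=False)
        self.up = nn.Linear(bottleneck, frozen.out_features, bias=False)
        nn.init.zeros_(self.up.weight)  # side-path starts as a no-op

    def forward(self, x):
        return self.frozen(x) + self.up(torch.relu(self.down(x)))

# "Priming" read as: train only the Learner parameters for a short
# warm-up before any wider fine-tuning (an assumption, not the paper's
# exact procedure).
```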

Efficient Fine-Tuning of BERT Models on the Edge

no code implementations · 3 May 2022 · Danilo Vucetic, Mohammadreza Tayaranian, Maryam Ziaeefard, James J. Clark, Brett H. Meyer, Warren J. Gross

FAR (Freeze And Reconfigure) reduces fine-tuning time on the DistilBERT model and the CoLA dataset by 30%, and time spent on memory operations by 47% (see the sketch below).

Tasks: CoLA
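A hedged sketch of FAR-style freezing, assuming the selection works by probing a few batches and keeping gradients only for the most active parameter tensors; the magnitude-of-gradient heuristic and the function name are assumptions for illustration, not the paper's exact criterion.

```python
import torch
import torch.nn as nn

def far_style_freeze(model: nn.Module, probe_batches, loss_fn, keep_frac=0.1):
    """Hypothetical FAR-style selection: run a short probe phase, rank
    parameter tensors by average gradient magnitude, and freeze all but
    the top `keep_frac` fraction. Frozen tensors need no gradient
    storage or optimizer state, which cuts memory traffic."""
    scores = {n: 0.0 for n, _ in model.named_parameters()}
    for x, y in probe_batches:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                scores[n] += p.grad.abs().mean().item()
    ranked = sorted(scores, key=scores.get, reverse=True)
    keep = set(ranked[: max(1, int(len(ranked) * keep_frac))])
    for n, p in model.named_parameters():
        p.requires_grad = n in keep
```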
