1 code implementation • 11 Oct 2022 • Ling Li, David Thorsley, Joseph Hassoun
Sparse adaptive image Transformer (SaiT) offers varying levels of model acceleration by merely changing the token sparsity on the fly.
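Adjusting token sparsity on the fly is commonly done by ranking patch tokens with an importance score and keeping only a tunable fraction of them. The sketch below is a minimal illustration of that idea, not SaiT's actual method; the scoring function and `keep_ratio` knob are assumptions for demonstration.

```python
import numpy as np

def prune_tokens(tokens, scores, keep_ratio):
    """Keep the top keep_ratio fraction of tokens ranked by importance.

    tokens: (N, D) array of token embeddings
    scores: (N,) importance scores (e.g. derived from attention weights)
    keep_ratio: fraction of tokens to retain, tunable at inference time
    """
    n_keep = max(1, int(round(len(tokens) * keep_ratio)))
    keep_idx = np.argsort(scores)[::-1][:n_keep]  # highest-scoring tokens
    return tokens[np.sort(keep_idx)]              # preserve original order

tokens = np.random.randn(196, 64)   # e.g. 14x14 patch tokens from a ViT
scores = np.random.rand(196)        # placeholder importance scores
pruned = prune_tokens(tokens, scores, keep_ratio=0.5)  # 98 tokens remain
```

Because `keep_ratio` is just an argument, a single trained model can trade accuracy for speed at inference time without retraining.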
no code implementations • 29 Sep 2021 • Jun Fang, Li Yang, Chengyao Shen, Hamzah Abdel-Aziz, David Thorsley, Joseph Hassoun
In this work, we continue the effort to reduce the training cost of once-for-all (OFA) methods.
1 code implementation • 2 Jul 2021 • Sehoon Kim, Sheng Shen, David Thorsley, Amir Gholami, Woosuk Kwon, Joseph Hassoun, Kurt Keutzer
We extensively test the performance of LTP on GLUE tasks and show that our method outperforms the prior state-of-the-art token pruning methods by up to ~2.5% higher accuracy with the same amount of FLOPs.
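Threshold-based token pruning, the family LTP belongs to, drops tokens whose importance falls below a cutoff, so the number of surviving tokens adapts to the input rather than being fixed top-k. The sketch below illustrates only that mechanism under assumed inputs; in LTP the threshold itself is learned per layer during training, which this toy version does not show.

```python
import numpy as np

def threshold_prune(tokens, scores, threshold):
    """Drop tokens whose importance score is below a threshold.

    Unlike fixed top-k pruning, the count of surviving tokens varies
    per input: easy inputs can shed more tokens than hard ones.
    """
    mask = scores >= threshold
    if not mask.any():                    # always keep at least one token
        mask[np.argmax(scores)] = True
    return tokens[mask]

scores = np.array([0.9, 0.1, 0.4, 0.05, 0.7])   # hypothetical importances
tokens = np.arange(10, dtype=float).reshape(5, 2)
kept = threshold_prune(tokens, scores, threshold=0.3)  # tokens 0, 2, 4 survive
```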
3 code implementations • ECCV 2020 • Jun Fang, Ali Shafiee, Hamzah Abdel-Aziz, David Thorsley, Georgios Georgiadis, Joseph Hassoun
Quantization plays an important role in the energy-efficient deployment of deep neural networks on resource-limited devices.
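The energy savings come from replacing floating-point weights and activations with low-bit integers. A minimal sketch of symmetric uniform 8-bit quantization is below; it is a generic illustration, not the scheme proposed in the paper, and real deployments typically calibrate the scale per-tensor or per-channel.

```python
import numpy as np

def quantize_uniform(x, n_bits=8):
    """Symmetric uniform quantization: map floats to n_bits signed ints."""
    qmax = 2 ** (n_bits - 1) - 1                 # 127 for 8 bits
    max_abs = np.max(np.abs(x))
    scale = max_abs / qmax if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximation of the original floats."""
    return q.astype(np.float32) * scale

x = np.array([0.5, -1.27, 0.0, 1.27], dtype=np.float32)
q, s = quantize_uniform(x, n_bits=8)
x_hat = dequantize(q, s)   # close to x, up to rounding error of ~scale/2
```

Storing `q` instead of `x` cuts memory 4x versus float32 and enables cheap integer arithmetic on resource-limited devices, at the cost of the rounding error visible in `x_hat`.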