Search Results for author: David Thorsley

Found 4 papers, 3 papers with code

SaiT: Sparse Vision Transformers through Adaptive Token Pruning

1 code implementation • 11 Oct 2022 • Ling Li, David Thorsley, Joseph Hassoun

Sparse adaptive image Transformer (SaiT) offers varying levels of model acceleration by merely changing the token sparsity on the fly.
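The idea of trading accuracy for speed by changing token sparsity at inference time can be sketched with a simple top-k token selector. This is an illustrative sketch, not the paper's implementation: the function name, the scoring tensor, and the shapes are all assumptions made for the example.

```python
import numpy as np

def prune_tokens(tokens, scores, keep_ratio):
    """Keep the top `keep_ratio` fraction of tokens by importance score.

    Illustrative sketch (not SaiT's actual code):
    tokens: (n_tokens, dim) patch embeddings; scores: (n_tokens,)
    hypothetical importance scores. Because `keep_ratio` is a plain
    runtime argument, the same model can be run at different sparsity
    levels "on the fly" without retraining.
    """
    n_keep = max(1, int(len(scores) * keep_ratio))
    keep = np.argsort(scores)[-n_keep:]  # indices of the highest-scoring tokens
    keep.sort()                          # preserve the original token order
    return tokens[keep]

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 64))  # 16 tokens, 64-dim embeddings
s = rng.random(16)                 # hypothetical per-token scores
print(prune_tokens(x, s, 0.5).shape)  # (8, 64): half the tokens survive
```

Lowering `keep_ratio` drops more tokens and proportionally reduces attention FLOPs, which is the acceleration knob the abstract refers to.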

Knowledge Distillation

Learned Token Pruning for Transformers

1 code implementation • 2 Jul 2021 • Sehoon Kim, Sheng Shen, David Thorsley, Amir Gholami, Woosuk Kwon, Joseph Hassoun, Kurt Keutzer

We extensively test the performance of LTP on GLUE tasks and show that our method outperforms prior state-of-the-art token pruning methods by up to ~2.5% higher accuracy at the same number of FLOPs.
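In contrast to fixed top-k selection, threshold-based token pruning lets the number of surviving tokens adapt to each input. The sketch below illustrates that idea only; the function name, the scalar threshold, and the "always keep token 0" convention are assumptions for the example, not LTP's actual code.

```python
import numpy as np

def threshold_prune(tokens, scores, threshold):
    """Drop tokens whose importance score falls below a threshold.

    Illustrative sketch (not LTP's actual implementation): in learned
    token pruning the threshold would be a trained per-layer parameter;
    here it is just a scalar argument. Unlike top-k pruning, the number
    of kept tokens varies with the input.
    """
    mask = scores >= threshold
    mask[0] = True  # convention for this sketch: never drop the first (e.g. [CLS]) token
    return tokens[mask]

rng = np.random.default_rng(1)
x = rng.standard_normal((16, 64))
s = rng.random(16)                       # hypothetical per-token scores
kept = threshold_prune(x, s, 0.5)
print(kept.shape[0] <= 16)               # easy inputs keep fewer tokens
```

Because the threshold is learned rather than hand-set, the model itself decides how aggressively to prune at each layer, which is what allows accuracy to be held while FLOPs drop.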

Sentence
