no code implementations • 20 Jan 2024 • Ruiqi Xu, Tianchi Zhang
Although the computing power of mobile devices is increasing, so is the size of machine learning models.
no code implementations • 10 Dec 2023 • Jianwei Li, Tianchi Zhang, Ian En-Hsu Yen, Dongkuan Xu
Transformer-based models, such as BERT, have been widely applied to a broad range of natural language processing tasks.
1 code implementation • CVPR 2023 • Shengkun Tang, Yaqing Wang, Zhenglun Kong, Tianchi Zhang, Yao Li, Caiwen Ding, Yanzhi Wang, Yi Liang, Dongkuan Xu
To handle this challenge, we propose MuE, a novel early exiting strategy for unified visual-language models that dynamically skips layers in both the encoder and the decoder based on layer-wise input similarities, allowing early exits at multiple points.
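The core idea — exiting once consecutive layer outputs stop changing meaningfully — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the cosine-similarity saturation criterion, the mean pooling, and the `threshold` value are all simplifying assumptions introduced here.

```python
import numpy as np

def cosine_sim(a, b):
    # Mean-pool token representations, then compare directions.
    a, b = a.mean(axis=0), b.mean(axis=0)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def run_with_early_exit(layers, hidden, threshold=0.99):
    """Apply layers sequentially; exit once a layer's output is nearly
    identical to its input (saturated), skipping the remaining layers.
    Returns the final hidden states and the number of layers executed."""
    for i, layer in enumerate(layers):
        new_hidden = layer(hidden)
        if cosine_sim(hidden, new_hidden) >= threshold:
            return new_hidden, i + 1
        hidden = new_hidden
    return hidden, len(layers)
```

Applying the same criterion independently in the encoder and the decoder would let both stacks exit early, which is the simultaneous-skipping behavior the abstract describes.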