Towards Efficient and Elastic Visual Question Answering with Doubly Slimmable Transformer

24 Mar 2022  ·  Zhou Yu, Zitian Jin, Jun Yu, Mingliang Xu, Jianping Fan ·

Transformer-based approaches have shown great success in visual question answering (VQA). However, they usually require deep and wide models to guarantee good performance, making them difficult to deploy on capacity-restricted platforms. It is a challenging yet valuable task to design an elastic VQA model that supports adaptive pruning at runtime to meet the efficiency constraints of diverse platforms. In this paper, we present the Doubly Slimmable Transformer (DST), a general framework that can be seamlessly integrated into arbitrary Transformer-based VQA models to train a single model once and obtain various slimmed submodels of different widths and depths. Taking two typical Transformer-based VQA approaches, i.e., MCAN and UNITER, as the reference models, the obtained slimmable MCAN_DST and UNITER_DST models outperform state-of-the-art methods trained independently on two benchmark datasets. In particular, one slimmed MCAN_DST submodel achieves comparable accuracy on VQA-v2 while requiring only 0.38x the model size and 0.27x the FLOPs of the reference MCAN model. The smallest MCAN_DST submodel has 9M parameters and 0.16G FLOPs in the inference stage, making it feasible to deploy on edge devices.
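The core idea of training one model and slicing out submodels of different widths and depths can be illustrated with a minimal sketch. The snippet below is a hypothetical NumPy illustration of "doubly slimmable" inference, not the authors' implementation: a layer's full weights are kept, and a slimmed submodel simply uses the leading rows/columns (width) and the first few layers (depth).

```python
import numpy as np


class SlimmableLinear:
    """A linear layer whose active width can be chosen at runtime.

    Hypothetical sketch: the full weight matrix is trained once; a
    slimmed submodel uses only the first `in_active` input and
    `out_active` output units by slicing the weight and bias.
    """

    def __init__(self, in_features, out_features, seed=0):
        rng = np.random.default_rng(seed)
        self.weight = rng.standard_normal((out_features, in_features)) * 0.02
        self.bias = np.zeros(out_features)

    def forward(self, x, in_active, out_active):
        # Slice the shared parameters down to the active width.
        w = self.weight[:out_active, :in_active]
        b = self.bias[:out_active]
        return x[..., :in_active] @ w.T + b


def slim_forward(layers, x, width_ratio=1.0, depth_ratio=1.0):
    """Run a stack of layers at a chosen width and depth.

    `width_ratio` shrinks the hidden dimension; `depth_ratio` drops the
    trailing layers -- the two "slimmable" axes the abstract refers to.
    """
    n_layers = max(1, int(round(len(layers) * depth_ratio)))
    full_dim = x.shape[-1]
    active_dim = max(1, int(round(full_dim * width_ratio)))
    h = x[..., :active_dim]
    for layer in layers[:n_layers]:
        h = layer.forward(h, in_active=active_dim, out_active=active_dim)
    return h
```

For example, running the same four-layer stack with `width_ratio=0.5, depth_ratio=0.5` executes only two layers at half the hidden size, without retraining; the paper's training procedure (how the shared weights are optimized so every submodel stays accurate) is beyond this sketch.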

