TokenLearner: What Can 8 Learned Tokens Do for Images and Videos?

21 Jun 2021  ·  Michael S. Ryoo, AJ Piergiovanni, Anurag Arnab, Mostafa Dehghani, Anelia Angelova ·

In this paper, we introduce a novel visual representation learning approach that relies on a handful of adaptively learned tokens and is applicable to both image and video understanding tasks. Instead of relying on hand-designed splitting strategies to obtain visual tokens and processing a large number of densely sampled patches for attention, our approach learns to mine important tokens in visual data. This results in efficiently and effectively finding a few important visual tokens and enables modeling of pairwise attention between such tokens, over a longer temporal horizon for videos or over the spatial content in images. Our experiments demonstrate strong performance on several challenging benchmarks for both image and video recognition tasks. Importantly, because our tokens are adaptive, we achieve competitive results at significantly reduced compute cost. We obtain results comparable to the state of the art on ImageNet while being computationally more efficient. We also confirm the effectiveness of the approach on multiple video datasets, including Kinetics-400, Kinetics-600, Charades, and AViD. The code is available at:
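The core idea — replacing a dense grid of patch tokens with a few learned ones — can be sketched as spatial attention followed by weighted pooling: a small learned head scores every spatial position per output token, and each token is the attention-weighted average of the feature map. Below is a minimal numpy sketch under simplifying assumptions: the paper's attention head is a small conv/MLP block (with a sigmoid or softmax variant depending on the version), while here it is reduced to a single hypothetical linear layer `w` with a spatial softmax.

```python
import numpy as np

def token_learner(x, w):
    """Minimal TokenLearner-style pooling sketch (not the paper's exact module).

    x: (H*W, C) flattened feature map from a vision backbone.
    w: (C, S) weights of a hypothetical one-layer attention head producing
       S spatial attention maps (the paper uses a small conv/MLP block).
    Returns (S, C): each learned token is a spatially weighted average of x.
    """
    logits = x @ w                                   # (H*W, S) per-position scores
    # Softmax over the spatial axis: each of the S tokens attends over all positions.
    e = np.exp(logits - logits.max(axis=0, keepdims=True))
    attn = e / e.sum(axis=0, keepdims=True)          # (H*W, S), columns sum to 1
    return attn.T @ x                                # (S, C) learned tokens

# Toy usage: a 14x14 feature map with 64 channels reduced to 8 tokens.
rng = np.random.default_rng(0)
feats = rng.standard_normal((14 * 14, 64))
w = rng.standard_normal((64, 8))
tokens = token_learner(feats, w)
print(tokens.shape)  # (8, 64)
```

Downstream attention then operates over the 8 tokens instead of all 196 positions, which is where the compute savings in the abstract come from: pairwise attention cost scales quadratically with token count.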


Results from the Paper

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Action Classification | AViD | TokenLearner | Accuracy | 53.8 | # 1 |
| Action Classification | Charades | TokenLearner | mAP | 66.3 | # 1 |
| Image Classification | ImageNet | TokenLearner L/8 (24+11) | Top 1 Accuracy | 88.87% | # 33 |
| Image Classification | ImageNet | TokenLearner L/8 (24+11) | Number of params | 460M | # 885 |
| Image Classification | ImageNet | 16-TokenLearner B/16 (21) | Top 1 Accuracy | 87.07% | # 103 |
| Image Classification | ImageNet ReaL | TokenLearner L/8 (24+11) | Accuracy | 91.05% | # 5 |
| Image Classification | ImageNet ReaL | TokenLearner L/8 (24+11) | Params | 460M | # 48 |
| Action Classification | Kinetics-400 | TokenLearner 16at18 (L/10) | Acc@1 | 85.4 | # 39 |
| Action Classification | Kinetics-600 | TokenLearner 16at18 w. Fuser (L/10) | Top-1 Accuracy | 86.3 | # 25 |
| Action Classification | Kinetics-600 | TokenLearner 16at18 w. Fuser (L/10) | Top-5 Accuracy | 97.0 | # 19 |

