Modality Laziness: Everybody's Business is Nobody's Business

29 Sep 2021 · Chenzhuang Du, Jiaye Teng, Tingle Li, Yichen Liu, Yue Wang, Yang Yuan, Hang Zhao

Models that fuse multiple modalities receive more information and can outperform their uni-modal counterparts. However, existing multi-modal training approaches often learn insufficient representations of each individual modality. We analyze this phenomenon theoretically and prove that with more modalities, models saturate quickly and ignore features that are hard to learn but important. We call this problem of multi-modal training "Modality Laziness." Its solution depends on the notion of paired features: if the data contain no paired features, one may simply train on each modality independently. Otherwise, we propose Uni-Modal Teacher (UMT), which distills pre-trained uni-modal features into the corresponding branches of the multi-modal model, acting as a pushing force against the laziness problem. We empirically verify that, following this dichotomy, we achieve competitive performance on various multi-modal datasets.
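To make the UMT idea concrete, below is a minimal PyTorch sketch, not the authors' released implementation. It assumes a late-fusion architecture with two encoders, frozen pre-trained uni-modal teachers, MSE-based feature distillation, and a weighting coefficient `lam`; all names here (`LateFusionModel`, `umt_step`, `teacher_a`, `teacher_b`) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LateFusionModel(nn.Module):
    """Two uni-modal encoders whose features are concatenated for a joint head."""
    def __init__(self, enc_a: nn.Module, enc_b: nn.Module,
                 feat_dim: int, num_classes: int):
        super().__init__()
        self.enc_a = enc_a  # e.g. the audio branch
        self.enc_b = enc_b  # e.g. the video branch
        self.head = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, x_a, x_b):
        f_a = self.enc_a(x_a)
        f_b = self.enc_b(x_b)
        logits = self.head(torch.cat([f_a, f_b], dim=-1))
        return logits, f_a, f_b

def umt_step(model, teacher_a, teacher_b, x_a, x_b, y, lam=1.0):
    """One training step: the usual task loss plus distillation of each
    branch's features toward frozen pre-trained uni-modal teachers,
    so neither modality is left under-trained ("lazy")."""
    logits, f_a, f_b = model(x_a, x_b)
    with torch.no_grad():  # teachers are pre-trained and kept frozen
        t_a = teacher_a(x_a)
        t_b = teacher_b(x_b)
    task_loss = F.cross_entropy(logits, y)
    distill_loss = F.mse_loss(f_a, t_a) + F.mse_loss(f_b, t_b)
    return task_loss + lam * distill_loss
```

In this reading, `lam` trades off the joint task objective against how strongly each branch is pushed toward its uni-modal teacher; when the data have no paired features, the same setup degenerates to the independent per-modality training the abstract describes.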
