Distribution Matching for Multi-Task Learning of Classification Tasks: a Large-Scale Study on Faces & Beyond

2 Jan 2024 · Dimitrios Kollias, Viktoriia Sharmanska, Stefanos Zafeiriou

Multi-Task Learning (MTL) is a framework in which multiple related tasks are learned jointly and benefit from a shared representation space or parameter transfer. To provide sufficient learning support, modern MTL uses annotated data with full, or sufficiently large, overlap across tasks, i.e., each input sample is annotated for all, or most, of the tasks. However, collecting such annotations is prohibitive in many real applications, and this setup cannot benefit from datasets that are available only for individual tasks. In this work, we challenge this setup and show that MTL can be successful with classification tasks whose annotations overlap little or not at all, or when there is a large discrepancy in the size of the labeled data per task. We explore task-relatedness for co-annotation and co-training, and propose a novel approach in which knowledge exchange between the tasks is enabled via distribution matching. To demonstrate the general applicability of our method, we conducted diverse case studies in the domains of affective computing, face recognition, species recognition, and shopping item classification using nine datasets. Our large-scale study of affective tasks for basic expression recognition and facial action unit detection illustrates that our approach is network agnostic and brings large performance improvements compared to the state of the art on both tasks and across all studied databases. In all case studies, we show that co-training via task-relatedness is advantageous and prevents negative transfer (which occurs when the multi-task model's performance is worse than that of at least one single-task model).
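The abstract only sketches the idea of coupling tasks with little or no overlapping annotation through distribution matching driven by task-relatedness. The snippet below is a minimal, hypothetical PyTorch sketch of that general idea, not the paper's exact formulation: `TwoTaskModel`, `mtl_loss`, and the `relatedness` prior are illustrative names introduced here, and the prior linking task-A classes to task-B label distributions is assumed to come from domain knowledge (e.g., expression-to-action-unit relationships).

```python
# Illustrative sketch (not the paper's exact method): a shared backbone with two
# classification heads. A sample labeled only for task A contributes a supervised
# loss on task A plus a distribution-matching term that aligns the task-B prediction
# with a target distribution implied by the task-A label via a relatedness prior.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoTaskModel(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int, n_classes_a: int, n_classes_b: int):
        super().__init__()
        self.backbone = backbone                          # shared representation
        self.head_a = nn.Linear(feat_dim, n_classes_a)    # e.g., basic expressions
        self.head_b = nn.Linear(feat_dim, n_classes_b)    # e.g., action units

    def forward(self, x):
        z = self.backbone(x)
        return self.head_a(z), self.head_b(z)


def mtl_loss(logits_a, logits_b, labels_a, relatedness, lambda_dm=1.0):
    """Supervised loss on task A plus a distribution-matching term on task B.

    relatedness: (n_classes_a, n_classes_b) row-stochastic matrix giving, for each
    task-A class, a target distribution over task-B classes (hypothetical prior).
    """
    # Standard cross-entropy on the task that is actually annotated.
    loss_sup = F.cross_entropy(logits_a, labels_a)

    # Target distribution over task-B classes implied by each task-A label.
    target_b = relatedness[labels_a]                      # (batch, n_classes_b)

    # Match the predicted task-B distribution to the implied target (KL divergence).
    log_pred_b = F.log_softmax(logits_b, dim=-1)
    loss_dm = F.kl_div(log_pred_b, target_b, reduction="batchmean")

    return loss_sup + lambda_dm * loss_dm


# Example usage (hypothetical shapes and an uninformative prior, for illustration only):
# backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 256), nn.ReLU())
# model = TwoTaskModel(backbone, feat_dim=256, n_classes_a=7, n_classes_b=12)
# relatedness = torch.full((7, 12), 1.0 / 12)
# logits_a, logits_b = model(torch.randn(4, 3, 112, 112))
# loss = mtl_loss(logits_a, logits_b, torch.randint(0, 7, (4,)), relatedness)
```

In a full co-training setup, the direction of the matching term and its weight (lambda_dm above) would depend on which task a given sample is annotated for; the paper evaluates this kind of task-relatedness-driven co-training across nine datasets.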


Results from the Paper


 Ranked #1 on Facial Expression Recognition (FER) on AffectNet (Accuracy (7 emotion) metric, using extra training data)

Task                                | Dataset   | Model             | Metric Name          | Metric Value | Global Rank
Facial Expression Recognition (FER) | AffectNet | C MT EffNet-B2    | Accuracy (7 emotion) | 68.9         | #2
Facial Expression Recognition (FER) | AffectNet | C MT EmoAffectNet | Accuracy (7 emotion) | 69.4         | #1
Facial Expression Recognition (FER) | RAF-DB    | C MT VGGFACE      | Avg. Accuracy        | 81.4         | #3
Facial Expression Recognition (FER) | RAF-DB    | C MT PSR          | Avg. Accuracy        | 84.8         | #2

Methods


No methods listed for this paper.