Auxiliary Learning
25 papers with code • 0 benchmarks • 0 datasets
Auxiliary learning aims to find or design auxiliary tasks that improve performance on one or more primary tasks.
(Image credit: Self-Supervised Generalisation with Meta Auxiliary Learning)
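The core idea can be sketched in a few lines: train shared parameters on the primary loss plus a weighted auxiliary loss. The toy data, linear model, and fixed weight `lam` below are illustrative assumptions, not taken from any paper on this page.

```python
import numpy as np

# Toy auxiliary-learning sketch: shared linear weights are trained on a
# primary regression target plus a weighted, correlated auxiliary target.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
w_true = rng.normal(size=5)
y_primary = X @ w_true                               # primary-task labels
y_aux = X @ w_true + 0.1 * rng.normal(size=100)      # noisy auxiliary labels

w = np.zeros(5)
lam = 0.5   # fixed auxiliary weight; methods like Auto-Lambda learn this
lr = 0.1
for _ in range(500):
    grad_primary = X.T @ (X @ w - y_primary) / len(X)
    grad_aux = X.T @ (X @ w - y_aux) / len(X)
    w -= lr * (grad_primary + lam * grad_aux)        # combined objective

primary_mse = np.mean((X @ w - y_primary) ** 2)
```

Because the auxiliary labels are correlated with the primary ones, the combined gradient still drives the primary error down; with unrelated auxiliary labels the same weighting would hurt, which is what motivates learned task weightings.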
Benchmarks
These leaderboards are used to track progress in Auxiliary Learning
Most implemented papers
Boost-RS: Boosted Embeddings for Recommender Systems and its Application to Enzyme-Substrate Interaction Prediction
We show that each of our auxiliary tasks boosts learning of the embedding vectors, and that contrastive learning using Boost-RS outperforms attribute concatenation and multi-label learning.
Auxiliary Learning for Self-Supervised Video Representation via Similarity-based Knowledge Distillation
Our experimental results are superior to the state of the art on both the UCF101 and HMDB51 datasets when pretraining on K100, in apples-to-apples comparisons.
On Exploring Pose Estimation as an Auxiliary Learning Task for Visible-Infrared Person Re-identification
Visible-infrared person re-identification (VI-ReID) has been challenging due to the existence of large discrepancies between visible and infrared modalities.
Auto-Lambda: Disentangling Dynamic Task Relationships
Unlike previous methods, where task relationships are assumed to be fixed, Auto-Lambda is a gradient-based meta-learning framework that explores continuous, dynamic task relationships via task-specific weightings. It can optimise any combination of tasks through the formulation of a meta-loss, in which the validation loss automatically influences task weightings throughout training.
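A minimal sketch of this kind of validation-driven task weighting is shown below. Note the assumptions: a linear model, a single auxiliary task, and a finite-difference meta-gradient in place of the analytic differentiation through the update that the actual method uses; all names and data are illustrative.

```python
import numpy as np

# Sketch of gradient-based task weighting: the auxiliary weight `lam` is
# tuned so that a training step taken with it reduces the *validation*
# loss of the primary task (here via a finite-difference meta-gradient).
rng = np.random.default_rng(1)
d, n = 4, 80
w_true = rng.normal(size=d)
X_tr, X_val = rng.normal(size=(n, d)), rng.normal(size=(n, d))
y_tr, y_val = X_tr @ w_true, X_val @ w_true
y_aux = X_tr @ w_true + 0.2 * rng.normal(size=n)     # auxiliary labels

def inner_step(w, lam, lr=0.1):
    # One step on the combined training loss with auxiliary weight lam.
    g = (X_tr.T @ (X_tr @ w - y_tr) + lam * X_tr.T @ (X_tr @ w - y_aux)) / n
    return w - lr * g

def val_loss(w):
    return np.mean((X_val @ w - y_val) ** 2)

w, lam, meta_lr, eps = np.zeros(d), 1.0, 0.5, 1e-3
for _ in range(300):
    # Meta update: finite-difference gradient of validation loss w.r.t. lam.
    g_lam = (val_loss(inner_step(w, lam + eps)) -
             val_loss(inner_step(w, lam - eps))) / (2 * eps)
    lam = max(0.0, lam - meta_lr * g_lam)            # keep weight non-negative
    w = inner_step(w, lam)                           # ordinary training step

final_val = val_loss(w)
```

The weighting thus adapts during training: if the auxiliary task stops helping the primary validation loss, its weight is pushed down automatically.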
Improving CTC-based speech recognition via knowledge transferring from pre-trained language models
Recently, end-to-end automatic speech recognition models based on connectionist temporal classification (CTC) have achieved impressive results, especially when fine-tuned from wav2vec 2.0 models.
Counting with Adaptive Auxiliary Learning
This paper proposes an adaptive auxiliary-task learning approach to object counting problems.
Benchmark for Uncertainty & Robustness in Self-Supervised Learning
Self-Supervised Learning (SSL) is crucial for real-world applications, especially in data-hungry domains such as healthcare and self-driving cars.
Auxiliary Learning as an Asymmetric Bargaining Game
Auxiliary learning is an effective method for enhancing the generalization capabilities of trained models, particularly when dealing with small datasets.
Enhancing Deep Knowledge Tracing with Auxiliary Tasks
In this paper, we propose AT-DKT to improve the prediction performance of the original deep knowledge tracing model with two auxiliary learning tasks, i.e., a question tagging (QT) prediction task and an individualized prior knowledge (IK) prediction task.
MELTR: Meta Loss Transformer for Learning to Fine-tune Video Foundation Models
Therefore, we propose MEta Loss TRansformer (MELTR), a plug-in module that automatically and non-linearly combines various loss functions to aid learning of the target task via auxiliary learning.
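The non-linear combination idea can be illustrated with a toy forward pass: instead of a fixed weighted sum, a small learned network maps the vector of per-task losses to a single scalar objective. The two-layer form, shapes, and initialisation below are illustrative assumptions only, not the MELTR architecture.

```python
import numpy as np

# Toy sketch of a learned non-linear loss combiner: a tiny network maps a
# vector of per-task losses to one scalar training objective.
rng = np.random.default_rng(2)
n_losses, hidden = 3, 8
W1, b1 = rng.normal(scale=0.1, size=(hidden, n_losses)), np.zeros(hidden)
W2, b2 = rng.normal(scale=0.1, size=hidden), 0.0

def combine(losses):
    h = np.tanh(W1 @ losses + b1)                    # non-linear mixing of losses
    return float(np.logaddexp(0.0, W2 @ h + b2))     # softplus keeps output positive

total = combine(np.array([0.9, 0.4, 1.2]))           # e.g. [primary, aux1, aux2]
```

In a full system the combiner's parameters would themselves be meta-learned (here, against a validation objective), so the mixing can change over training rather than staying a fixed weighted sum.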