Self-Learning
98 papers with code • 0 benchmarks • 1 dataset
Most implemented papers
Deep Reinforcement learning for real autonomous mobile robot navigation in indoor environments
In this paper we present a proof of concept for autonomous, self-learning navigation on a real robot in an unknown indoor environment, without a map or planner.
Domain Adaptation without Source Data
Our key idea is to leverage a pre-trained model from the source domain and progressively update the target model in a self-learning manner.
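As a rough illustration of that idea, a minimal self-learning update might look like the sketch below; all names, the confidence threshold, and the toy model are illustrative assumptions, not the paper's actual code.

```python
# Hypothetical sketch of source-free self-learning: pseudo-label target
# data with the source-pretrained model, then fine-tune on confident labels.
import torch
import torch.nn.functional as F

def self_learning_step(model, optimizer, target_x, threshold=0.9):
    """One progressive update on unlabeled target data (illustrative only)."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(target_x), dim=1)
        conf, pseudo_y = probs.max(dim=1)
        keep = conf >= threshold          # trust only confident predictions

    if keep.sum() == 0:
        return None                       # nothing confident enough this step

    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(target_x[keep]), pseudo_y[keep])
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with a stand-in "source-pretrained" classifier.
model = torch.nn.Linear(16, 3)            # placeholder for the source model
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
for _ in range(5):                        # progressively update the target model
    self_learning_step(model, opt, torch.randn(64, 16))
```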
Learning Program Synthesis for Integer Sequences from Scratch
We present a self-learning approach for synthesizing programs from integer sequences.
Protoformer: Embedding Prototypes for Transformers
This paper proposes Protoformer, a novel self-learning framework for Transformers that can leverage problematic samples for text classification.
A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings
Recent work has managed to learn cross-lingual word embeddings without parallel data by mapping monolingual embeddings to a shared space through adversarial training.
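The self-learning alternative this paper proposes iterates between fitting an orthogonal mapping on the current dictionary and re-inducing the dictionary from nearest neighbors. A bare-bones numpy sketch of that generic loop follows; it uses toy random embeddings and omits all of the paper's robustness tricks (stochastic dictionary induction, CSLS retrieval, and the unsupervised initialization).

```python
# Bare-bones sketch of the generic self-learning loop for cross-lingual
# embedding mapping (illustrative; the paper adds many robustness tricks).
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 50))       # source-language embeddings (toy)
Z = rng.standard_normal((1000, 50))       # target-language embeddings (toy)
X /= np.linalg.norm(X, axis=1, keepdims=True)
Z /= np.linalg.norm(Z, axis=1, keepdims=True)

src = np.arange(100)                      # seed dictionary pairs (toy):
tgt = np.arange(100)                      # source word i <-> target word i

for _ in range(10):
    # 1) Procrustes: W = argmin ||X[src] W - Z[tgt]||  s.t.  W orthogonal.
    u, _, vt = np.linalg.svd(X[src].T @ Z[tgt])
    W = u @ vt
    # 2) Induce a new dictionary from nearest neighbors under the mapping
    #    (rows are unit-norm, so these dot products are cosine similarities).
    sims = (X @ W) @ Z.T
    src = np.arange(len(X))
    tgt = sims.argmax(axis=1)
```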
Multi-Source Domain Adaptation and Semi-Supervised Domain Adaptation with Focus on Visual Domain Adaptation Challenge 2019
Semi-Supervised Domain Adaptation: For this task, we adopt a standard self-learning framework to construct a classifier based on the labeled source and target data, and generate pseudo labels for the unlabeled target data.
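The "standard self-learning framework" described here is essentially pseudo-labeling, a loop that scikit-learn ships as SelfTrainingClassifier. A minimal sketch, with toy data standing in for the challenge's source/target features (the actual pipeline is far more involved):

```python
# Minimal pseudo-labeling baseline using scikit-learn's built-in
# self-training loop (toy data in place of the real challenge features).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 10))
y = (X[:, 0] > 0).astype(int)
y[100:] = -1                              # -1 marks unlabeled "target" samples

clf = SelfTrainingClassifier(LogisticRegression(), threshold=0.8)
clf.fit(X, y)                             # iteratively adds confident pseudo labels
print(clf.predict(X[:5]))
```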
Self-Learning Transformations for Improving Gaze and Head Redirection
We also show that, given limited amounts of real-world training data, our method improves performance on the downstream task of semi-supervised cross-dataset gaze estimation.
Knowledge Inheritance for Pre-trained Language Models
Specifically, we introduce a pre-training framework named "knowledge inheritance" (KI) and explore how knowledge distillation can serve as auxiliary supervision during pre-training to efficiently learn larger PLMs.
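A generic sketch of distillation as an auxiliary pre-training loss, which is a plausible reading of this setup rather than KI's exact formulation:

```python
# Generic sketch of knowledge distillation as an auxiliary pre-training loss
# (not the exact KI formulation): total loss = LM loss + weighted KD term.
import torch
import torch.nn.functional as F

def kd_auxiliary_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-label KL from an already-trained smaller teacher to the larger student."""
    s = F.log_softmax(student_logits / temperature, dim=-1)
    t = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(s, t, reduction="batchmean") * temperature ** 2

# Toy usage: combine with the ordinary language-modeling loss.
student_logits = torch.randn(8, 32000, requires_grad=True)  # vocab-size logits
teacher_logits = torch.randn(8, 32000)
lm_loss = torch.tensor(2.5)               # placeholder for the usual LM loss
total = lm_loss + 0.5 * kd_auxiliary_loss(student_logits, teacher_logits)
total.backward()
```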
Transfer of Pretrained Model Weights Substantially Improves Semi-Supervised Image Classification
Deep neural networks produce state-of-the-art results when trained on large numbers of labeled examples, but tend to overfit when only small amounts of labeled data are available for training.
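The practical recipe this finding points to is the usual weight-transfer one. A minimal, illustrative version with a torchvision backbone (the paper's exact models and fine-tuning strategy may differ):

```python
# Common weight-transfer recipe (illustrative): initialize from pretrained
# weights, swap the classification head, fine-tune on the small labeled set.
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")   # pretrained backbone
for p in model.parameters():
    p.requires_grad = False                        # freeze transferred weights
model.fc = nn.Linear(model.fc.in_features, 10)     # new head for 10 classes
# Only model.fc.parameters() are now trained on the small labeled set;
# unfreezing the backbone for full fine-tuning is an equally common choice.
```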
Maximum Bayes Smatch Ensemble Distillation for AMR Parsing
AMR parsing has experienced an unprecedented increase in performance in the last three years, due to a mixture of effects including architecture improvements and transfer learning.