Self-Learning

86 papers with code • 0 benchmarks • 1 dataset

Self-learning (also called self-training) covers methods in which a model improves itself from its own predictions, typically by generating pseudo labels or other training signal on unlabeled data and retraining on it.

Most implemented papers

Deep Reinforcement learning for real autonomous mobile robot navigation in indoor environments

RoblabWh/RobLearn 28 May 2020

In this paper we present our proof of concept for autonomous self-learning robot navigation: a real robot navigates an unknown indoor environment without a map or planner.

Domain Adaptation without Source Data

youngryan1993/SFDA-Domain-Adaptation-without-Source-Data 3 Jul 2020

Our key idea is to leverage a pre-trained model from the source domain and progressively update the target model in a self-learning manner.
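The excerpt describes source-free self-learning: only the pre-trained source model and unlabeled target data are available at adaptation time. A minimal sketch of that progressive update, assuming a generic PyTorch classifier; the function and names below are illustrative, not the repository's actual API:

```python
import copy
import torch
import torch.nn.functional as F

def adapt_source_free(source_model, target_loader, rounds=5, threshold=0.9, lr=1e-4):
    # Start the target model from the pre-trained source model; no source data is used.
    target_model = copy.deepcopy(source_model)
    optimizer = torch.optim.Adam(target_model.parameters(), lr=lr)
    for _ in range(rounds):
        for x in target_loader:  # unlabeled target batches
            with torch.no_grad():
                probs = F.softmax(target_model(x), dim=1)
                conf, pseudo = probs.max(dim=1)
            mask = conf > threshold  # keep only confident pseudo labels
            if not mask.any():
                continue
            loss = F.cross_entropy(target_model(x[mask]), pseudo[mask])
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return target_model
```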

Learning Program Synthesis for Integer Sequences from Scratch

barakeel/oeis-synthesis 24 Feb 2022

We present a self-learning approach for synthesizing programs from integer sequences.

A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings

artetxem/vecmap ACL 2018

Recent work has managed to learn cross-lingual word embeddings without parallel data by mapping monolingual embeddings to a shared space through adversarial training.
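The paper's robust self-learning method instead alternates between solving the mapping on the current dictionary and re-inducing the dictionary under that mapping. A minimal numpy sketch of that loop, assuming length-normalized embeddings and using orthogonal Procrustes for the mapping step; names are illustrative and this is not vecmap's actual code:

```python
import numpy as np

def self_learning_map(X, Z, seed_pairs, iters=10):
    """X: (n, d) source embeddings, Z: (m, d) target embeddings,
    both length-normalized; seed_pairs: initial (src, trg) index pairs."""
    src, trg = (np.array(idx) for idx in zip(*seed_pairs))
    for _ in range(iters):
        # Mapping step: orthogonal Procrustes, W = U V^T from the SVD
        # of the cross-covariance of the current dictionary.
        u, _, vt = np.linalg.svd(X[src].T @ Z[trg])
        W = u @ vt
        # Dictionary induction step: nearest target neighbor (dot product,
        # i.e. cosine for normalized vectors) of each mapped source word.
        sims = (X @ W) @ Z.T
        src = np.arange(X.shape[0])
        trg = sims.argmax(axis=1)
    return W
```

The published method layers robustness tricks (e.g., stochastic dictionary induction and a frequency-based vocabulary cutoff) on top of this basic loop so it converges from a fully unsupervised initialization.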

Multi-Source Domain Adaptation and Semi-Supervised Domain Adaptation with Focus on Visual Domain Adaptation Challenge 2019

Panda-Peter/visda2019-multisource 8 Oct 2019

Semi-Supervised Domain Adaptation: For this task, we adopt a standard self-learning framework to construct a classifier from the labeled source and target data and to generate pseudo labels for the unlabeled target data.
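As an illustration of that standard framework, a short sketch of the pseudo-labeling loop; `train` and `predict_proba` are hypothetical helpers, not code from the challenge entry:

```python
import numpy as np

def self_learning(train, predict_proba, X_lab, y_lab, X_unlab,
                  rounds=3, threshold=0.95):
    model = train(X_lab, y_lab)              # labeled source + target data
    for _ in range(rounds):
        probs = predict_proba(model, X_unlab)            # (n, n_classes)
        conf, pseudo = probs.max(axis=1), probs.argmax(axis=1)
        keep = conf >= threshold             # confident pseudo labels only
        X = np.concatenate([X_lab, X_unlab[keep]])
        y = np.concatenate([y_lab, pseudo[keep]])
        model = train(X, y)                  # retrain on labeled + pseudo-labeled
    return model
```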

Self-Learning Transformations for Improving Gaze and Head Redirection

swook/faze_preprocess NeurIPS 2020

Furthermore, we show that with limited amounts of real-world training data, our method improves the downstream task of semi-supervised cross-dataset gaze estimation.

Knowledge Inheritance for Pre-trained Language Models

thunlp/Knowledge-Inheritance NAACL 2022

Specifically, we introduce a pre-training framework named "knowledge inheritance" (KI) and explore how knowledge distillation can serve as auxiliary supervision during pre-training to efficiently learn larger PLMs.
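A hedged sketch of distillation as auxiliary supervision during pre-training: the usual task loss plus a temperature-scaled KL term toward a frozen, already-trained teacher. The names and weighting below are illustrative, not the KI codebase:

```python
import torch
import torch.nn.functional as F

def ki_step(student, teacher, inputs, labels, alpha=0.5, T=2.0):
    """One pre-training step with distillation as auxiliary supervision."""
    s_logits = student(inputs)                     # larger student PLM being trained
    with torch.no_grad():
        t_logits = teacher(inputs)                 # frozen, already-trained teacher
    task_loss = F.cross_entropy(s_logits, labels)  # ordinary pre-training loss
    kd_loss = F.kl_div(
        F.log_softmax(s_logits / T, dim=-1),
        F.softmax(t_logits / T, dim=-1),
        reduction="batchmean",
    ) * T * T                                      # standard temperature scaling
    return (1 - alpha) * task_loss + alpha * kd_loss
```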

Transfer of Pretrained Model Weights Substantially Improves Semi-Supervised Image Classification

attaullah/Self-training 2 Sep 2021

Deep neural networks produce state-of-the-art results when trained on a large number of labeled examples but tend to overfit when only small amounts of labeled data are available.

Maximum Bayes Smatch Ensemble Distillation for AMR Parsing

IBM/transition-amr-parser NAACL 2022

AMR parsing has experienced an unprecedented increase in performance in the last three years, due to a mixture of effects including architecture improvements and transfer learning.

Self-Improving Safety Performance of Reinforcement Learning Based Driving with Black-Box Verification Algorithms

data-and-decision-lab/self-improving-RL 29 Oct 2022

In this work, we propose a self-improving artificial intelligence system to enhance the safety performance of reinforcement learning (RL)-based autonomous driving (AD) agents using black-box verification methods.