Transferring knowledge from previously seen tasks in order to maximize performance on new tasks remains a significant challenge for continual learning, limiting the applicability of continual learning solutions to realistic scenarios.
Although this setting is natural for biological systems, it proves very difficult for machine learning models such as artificial neural networks.
We introduce a new training paradigm that enforces interval constraints on neural network parameter space to control forgetting.
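The idea of interval constraints on the parameter space can be sketched as a projection step: this is a minimal illustrative sketch, assuming that each parameter is assigned an allowed interval after a task and that later updates are clipped back into it (the function names and the projection-after-update scheme are our assumptions, not the paper's exact method).

```python
import numpy as np

def project_to_intervals(params, lo, hi):
    # Clip each parameter into its allowed interval [lo_i, hi_i].
    return np.clip(params, lo, hi)

def constrained_sgd_step(params, grads, lo, hi, lr=0.1):
    # One gradient step followed by projection onto the interval box,
    # so parameters important for earlier tasks cannot drift away.
    return project_to_intervals(params - lr * grads, lo, hi)

params = np.array([0.5, -0.2, 1.0])
lo = np.array([0.0, -0.5, 0.8])   # assumed per-parameter lower bounds
hi = np.array([1.0, 0.0, 1.2])    # assumed per-parameter upper bounds

updated = constrained_sgd_step(params, np.array([10.0, -10.0, 0.0]), lo, hi)
```

Even under large gradients, every parameter stays inside its interval, which is the mechanism by which forgetting is controlled.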
One of the main arguments behind studying disentangled representations is the assumption that they can be easily reused in different tasks.
To combat this, our approach uses a simple yet effective rule-based fallback layer that performs sanity checks on an ML planner's decisions (e.g., avoiding collisions, ensuring physical feasibility).
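A rule-based fallback layer of this kind can be sketched as follows; the `Plan` fields, thresholds, and function names are illustrative assumptions, not the system's actual interface. The ML planner proposes a plan, simple hand-written rules veto it, and a safe fallback is substituted when any check fails.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    speed: float     # m/s
    accel: float     # m/s^2
    min_gap: float   # metres to the nearest obstacle along the plan

MAX_ACCEL = 3.0      # assumed physical-feasibility limit
MIN_SAFE_GAP = 2.0   # assumed collision-avoidance margin

def sanity_check(plan: Plan) -> bool:
    # True only if the plan passes every rule-based check.
    return abs(plan.accel) <= MAX_ACCEL and plan.min_gap >= MIN_SAFE_GAP

def select_plan(ml_plan: Plan, fallback: Plan) -> Plan:
    # Use the ML planner's output unless a sanity check fails.
    return ml_plan if sanity_check(ml_plan) else fallback
```

The design choice is that the rules never generate behaviour themselves; they only gate the learned planner, so the fallback stays easy to audit.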
In this work, we present the first offline policy gradient method for learning imitative policies for complex urban driving from a large corpus of real-world demonstrations.
Modern generative models achieve excellent quality in a variety of tasks including image or text generation and chemical molecule modeling.
The problem of reducing processing time of large deep learning models is a fundamental challenge in many real-world applications.
Continual learning (CL) -- the ability to continuously learn, building on previously acquired knowledge -- is a natural requirement for long-lived autonomous reinforcement learning (RL) agents.
We develop a fast end-to-end method for training lightweight neural networks using multiple classifier heads.
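One common realization of multiple classifier heads is an early-exit network: a head is attached after each trunk block, and at inference the first sufficiently confident head answers, so easy inputs skip the deeper blocks. The sketch below assumes this early-exit reading; shapes, thresholds, and random weights are illustrative stand-ins for a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# A shallow shared trunk of three blocks, with a classifier head
# (8 features -> 4 classes) attached after every block.
blocks = [rng.normal(size=(8, 8)) for _ in range(3)]
heads = [rng.normal(size=(8, 4)) for _ in range(3)]

def early_exit_predict(x, threshold=0.9):
    # Run blocks in order; return from the first confident head.
    h = x
    for block, head in zip(blocks, heads):
        h = np.tanh(h @ block)
        probs = softmax(h @ head)
        if probs.max() >= threshold:     # confident enough: exit early
            return int(probs.argmax()), float(probs.max())
    return int(probs.argmax()), float(probs.max())  # deepest head decides
```

Training all heads jointly against the same labels is what makes the network both lightweight at inference and trainable end to end.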
We propose a semi-supervised generative model, SeGMA, which learns a joint probability distribution of data and their classes, implemented in a typical Wasserstein auto-encoder framework.
Motivated by the way humans memorize images, we introduce a functional representation of images, in which an image is represented by a neural network.
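A functional (implicit) representation replaces the pixel grid with a network f(x, y) → gray value, so the image can be sampled at arbitrary, even non-grid, coordinates. This is a minimal sketch of the idea only: the tiny architecture and the random weights are stand-ins for a network actually fitted to an image.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small coordinate MLP: 2 inputs (x, y) -> 16 hidden units -> 1 output.
W1, b1 = rng.normal(size=(2, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

def image_fn(x, y):
    # Evaluate the network at normalized coordinates (x, y) in [0, 1];
    # the sigmoid keeps the gray value in (0, 1).
    h = np.tanh(np.array([x, y]) @ W1 + b1)
    return float(1.0 / (1.0 + np.exp(-(h @ W2 + b2))))

# The same network can be sampled at any resolution, e.g. an 8x8 grid:
img_8x8 = np.array([[image_fn(i / 7, j / 7) for j in range(8)]
                    for i in range(8)])
```

Because the image is a function rather than an array, resolution becomes a sampling choice at query time instead of a fixed property of the stored data.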