Search Results for author: Kristian Hartikainen

Found 11 papers, 4 papers with code

Bayesian Bellman Operators

no code implementations · NeurIPS 2021 · Matthew Fellows, Kristian Hartikainen, Shimon Whiteson

We introduce a novel perspective on Bayesian reinforcement learning (RL); whereas existing approaches infer a posterior over the transition distribution or Q-function, we characterise the uncertainty in the Bellman operator.

Continuous Control · Reinforcement Learning (RL)

Exploration in Approximate Hyper-State Space for Meta Reinforcement Learning

1 code implementation · 2 Oct 2020 · Luisa Zintgraf, Leo Feng, Cong Lu, Maximilian Igl, Kristian Hartikainen, Katja Hofmann, Shimon Whiteson

To rapidly learn a new task, it is often essential for agents to explore efficiently -- especially when performance matters from the first timestep.

Meta-Learning · Meta Reinforcement Learning · +2

The Ingredients of Real World Robotic Reinforcement Learning

no code implementations · ICLR 2020 · Henry Zhu, Justin Yu, Abhishek Gupta, Dhruv Shah, Kristian Hartikainen, Avi Singh, Vikash Kumar, Sergey Levine

The success of reinforcement learning in the real world has been limited to instrumented laboratory scenarios, often requiring arduous human supervision to enable continuous learning.

reinforcement-learning · Reinforcement Learning (RL)

The Ingredients of Real-World Robotic Reinforcement Learning

no code implementations · 27 Apr 2020 · Henry Zhu, Justin Yu, Abhishek Gupta, Dhruv Shah, Kristian Hartikainen, Avi Singh, Vikash Kumar, Sergey Levine

In this work, we discuss the elements that are needed for a robotic learning system that can continually and autonomously improve with data collected in the real world.

reinforcement-learning · Reinforcement Learning (RL)

ROBEL: Robotics Benchmarks for Learning with Low-Cost Robots

1 code implementation · 25 Sep 2019 · Michael Ahn, Henry Zhu, Kristian Hartikainen, Hugo Ponte, Abhishek Gupta, Sergey Levine, Vikash Kumar

ROBEL introduces two robots, each designed to accelerate reinforcement learning research in a different task domain: D'Claw is a three-fingered hand robot that facilitates learning dexterous manipulation tasks, and D'Kitty is a four-legged robot that facilitates learning agile legged locomotion tasks.

Continuous Control · reinforcement-learning · +1

Dynamical Distance Learning for Semi-Supervised and Unsupervised Skill Discovery

no code implementations · ICLR 2020 · Kristian Hartikainen, Xinyang Geng, Tuomas Haarnoja, Sergey Levine

We show that dynamical distances can be used in a semi-supervised regime, where unsupervised interaction with the environment is used to learn the dynamical distances, while a small amount of preference supervision is used to determine the task goal, without any manually engineered reward function or goal examples.

reinforcement-learning · Reinforcement Learning (RL)

End-to-End Robotic Reinforcement Learning without Reward Engineering

3 code implementations · 16 Apr 2019 · Avi Singh, Larry Yang, Kristian Hartikainen, Chelsea Finn, Sergey Levine

In this paper, we propose an approach that removes the need for manual engineering of reward specifications by enabling a robot to learn from a modest number of examples of successful outcomes, followed by actively solicited queries in which the robot shows the user a state and asks for a label indicating whether that state represents successful completion of the task.

reinforcement-learning · Reinforcement Learning (RL)

Latent Space Policies for Hierarchical Reinforcement Learning

no code implementations · ICML 2018 · Tuomas Haarnoja, Kristian Hartikainen, Pieter Abbeel, Sergey Levine

In contrast to methods that explicitly restrict or cripple lower layers of a hierarchy to force them to use higher-level modulating signals, each layer in our framework is trained to directly solve the task, but acquires a range of diverse strategies via a maximum entropy reinforcement learning objective.

Hierarchical Reinforcement Learning · reinforcement-learning · +1
