Search Results for author: Katja Hofmann

Found 45 papers, 20 papers with code

Trajectory VAE for multi-modal imitation

no code implementations ICLR 2019 Xiaoyu Lu, Jan Stuehmer, Katja Hofmann

In this paper, we use a generative model to capture different emergent playstyles in an unsupervised manner, enabling the imitation of a diverse range of distinct behaviours.

Continuous Control Decision Making +2

Strategically Efficient Exploration in Competitive Multi-agent Reinforcement Learning

1 code implementation 30 Jul 2021 Robert Loftin, Aadirupa Saha, Sam Devlin, Katja Hofmann

High sample complexity remains a barrier to the application of reinforcement learning (RL), particularly in multi-agent systems.

Efficient Exploration Multi-agent Reinforcement Learning

SocialAI: Benchmarking Socio-Cognitive Abilities in Deep Reinforcement Learning Agents

no code implementations 2 Jul 2021 Grgur Kovač, Rémy Portelas, Katja Hofmann, Pierre-Yves Oudeyer

In this paper, we argue that aiming towards human-level AI requires a broader set of key social skills: 1) language use in complex and variable social contexts; 2) beyond language, complex embodied communication in multimodal settings within constantly evolving social worlds.

Memory Efficient Meta-Learning with Large Images

2 code implementations NeurIPS 2021 John Bronskill, Daniela Massiceti, Massimiliano Patacchiola, Katja Hofmann, Sebastian Nowozin, Richard E. Turner

This limitation arises because a task's entire support set, which can contain up to 1000 images, must be processed before an optimization step can be taken.

Meta-Learning Transfer Learning

Grounding Spatio-Temporal Language with Transformers

1 code implementation NeurIPS 2021 Tristan Karch, Laetitia Teodorescu, Katja Hofmann, Clément Moulin-Frier, Pierre-Yves Oudeyer

While there is an extended literature studying how machines can learn grounded language, the topic of how to learn spatio-temporal linguistic concepts is still largely uncharted.

Navigation Turing Test (NTT): Learning to Evaluate Human-Like Navigation

1 code implementation 20 May 2021 Sam Devlin, Raluca Georgescu, Ida Momennejad, Jaroslaw Rzepecki, Evelyn Zuniga, Gavin Costello, Guy Leroy, Ali Shaw, Katja Hofmann

A key challenge on the path to developing agents that learn complex human-like behavior is the need to quickly and accurately quantify human-likeness.

SocialAI 0.1: Towards a Benchmark to Stimulate Research on Socio-Cognitive Abilities in Deep Reinforcement Learning Agents

no code implementations 27 Apr 2021 Grgur Kovač, Rémy Portelas, Katja Hofmann, Pierre-Yves Oudeyer

Building embodied autonomous agents capable of participating in social interactions with humans is one of the main challenges in AI.

ORBIT: A Real-World Few-Shot Dataset for Teachable Object Recognition

1 code implementation ICCV 2021 Daniela Massiceti, Luisa Zintgraf, John Bronskill, Lida Theodorou, Matthew Tobias Harris, Edward Cutrell, Cecily Morrison, Katja Hofmann, Simone Stumpf

To close this gap, we present the ORBIT dataset and benchmark, grounded in the real-world application of teachable object recognizers for people who are blind/low-vision.

Few-Shot Learning Object Recognition

TeachMyAgent: a Benchmark for Automatic Curriculum Learning in Deep RL

1 code implementation 17 Mar 2021 Clément Romac, Rémy Portelas, Katja Hofmann, Pierre-Yves Oudeyer

Training autonomous agents able to generalize to multiple tasks is a key target of Deep Reinforcement Learning (DRL) research.

Curriculum Learning

Evaluating the Robustness of Collaborative Agents

1 code implementation 14 Jan 2021 Paul Knott, Micah Carroll, Sam Devlin, Kamil Ciosek, Katja Hofmann, A. D. Dragan, Rohin Shah

We apply this methodology to build a suite of unit tests for the Overcooked-AI environment, and use this test suite to evaluate three proposals for improving robustness.

Meta Automatic Curriculum Learning

no code implementations 16 Nov 2020 Rémy Portelas, Clément Romac, Katja Hofmann, Pierre-Yves Oudeyer

In such complex task spaces, it is essential to rely on some form of Automatic Curriculum Learning (ACL) to adapt the task sampling distribution to a given learning agent, instead of randomly sampling tasks, as many could end up being either trivial or unfeasible.

Curriculum Learning

Exploration in Approximate Hyper-State Space for Meta Reinforcement Learning

1 code implementation 2 Oct 2020 Luisa Zintgraf, Leo Feng, Cong Lu, Maximilian Igl, Kristian Hartikainen, Katja Hofmann, Shimon Whiteson

To rapidly learn a new task, it is often essential for agents to explore efficiently -- especially when performance matters from the first timestep.

Meta-Learning Meta Reinforcement Learning

"It's Unwieldy and It Takes a Lot of Time." Challenges and Opportunities for Creating Agents in Commercial Games

no code implementations 1 Sep 2020 Mikhail Jacob, Sam Devlin, Katja Hofmann

We compare with literature from the research community that addresses the identified challenges, and conclude by highlighting promising directions for future research supporting agent creation in the games industry.

Guaranteeing Reproducibility in Deep Learning Competitions

no code implementations 12 May 2020 Brandon Houghton, Stephanie Milani, Nicholay Topin, William Guss, Katja Hofmann, Diego Perez-Liebana, Manuela Veloso, Ruslan Salakhutdinov

To encourage the development of methods with reproducible and robust training behavior, we propose a challenge paradigm where competitors are evaluated directly on the performance of their learning procedures rather than pre-trained agents.

AMRL: Aggregated Memory For Reinforcement Learning

no code implementations ICLR 2020 Jacob Beck, Kamil Ciosek, Sam Devlin, Sebastian Tschiatschek, Cheng Zhang, Katja Hofmann

In many partially observable scenarios, Reinforcement Learning (RL) agents must rely on long-term memory in order to learn an optimal policy.


SpatialSim: Recognizing Spatial Configurations of Objects with Graph Neural Networks

no code implementations 9 Apr 2020 Laetitia Teodorescu, Katja Hofmann, Pierre-Yves Oudeyer

Recognizing precise geometrical configurations of groups of objects is a key capability of human spatial cognition, yet it has so far been little studied in the deep learning literature.

Trying AGAIN instead of Trying Longer: Prior Learning for Automatic Curriculum Learning

no code implementations 7 Apr 2020 Rémy Portelas, Katja Hofmann, Pierre-Yves Oudeyer

A major challenge in the Deep RL (DRL) community is to train agents able to generalize over unseen situations, which is often approached by training them on a diversity of tasks (or environments).

Curriculum Learning

Automatic Curriculum Learning For Deep RL: A Short Survey

no code implementations 10 Mar 2020 Rémy Portelas, Cédric Colas, Lilian Weng, Katja Hofmann, Pierre-Yves Oudeyer

Automatic Curriculum Learning (ACL) has become a cornerstone of recent successes in Deep Reinforcement Learning (DRL). These methods shape the learning trajectories of agents by challenging them with tasks adapted to their capacities.

Curriculum Learning
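
The task-sampling loop that ACL methods share can be sketched as a simple learning-progress heuristic (a toy illustration, not any specific method from the survey; the task names and progress signal are made-up assumptions):

```python
import random

# Toy sketch of an ACL-style sampler: favour tasks where the agent's
# score is currently changing the most, with some residual exploration.
class ProgressCurriculum:
    def __init__(self, tasks, eps=0.2):
        self.tasks = tasks
        self.progress = {t: 0.0 for t in tasks}    # |recent score change|
        self.last_score = {t: 0.0 for t in tasks}
        self.eps = eps                              # uniform exploration rate

    def sample_task(self):
        if random.random() < self.eps or not any(self.progress.values()):
            return random.choice(self.tasks)
        # sample in proportion to absolute learning progress
        weights = [self.progress[t] for t in self.tasks]
        return random.choices(self.tasks, weights=weights)[0]

    def update(self, task, score):
        self.progress[task] = abs(score - self.last_score[task])
        self.last_score[task] = score

cur = ProgressCurriculum(["easy", "medium", "hard"])
cur.update("medium", 0.4)   # the agent just improved a lot on "medium"
task = cur.sample_task()    # "medium" is now sampled most often
```

Real ACL methods replace this scalar heuristic with richer teacher models, but the challenge-matched-to-capacity loop is the same.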

Better Exploration with Optimistic Actor Critic

1 code implementation NeurIPS 2019 Kamil Ciosek, Quan Vuong, Robert Loftin, Katja Hofmann

To address both of these phenomena, we introduce a new algorithm, Optimistic Actor Critic, which approximates a lower and upper confidence bound on the state-action value function.

Continuous Control Efficient Exploration
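
The bound construction described above can be sketched with two critic estimates (a hedged toy version: the bound coefficients, critic values, and uncertainty proxy below are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

# Sketch: from two bootstrapped critic estimates, form approximate lower
# and upper confidence bounds on Q(s, a) via their mean and spread.
def q_bounds(q1, q2, beta_lb=1.0, beta_ub=1.0):
    mean = 0.5 * (q1 + q2)
    spread = 0.5 * np.abs(q1 - q2)   # crude epistemic-uncertainty proxy
    return mean - beta_lb * spread, mean + beta_ub * spread

# Two hypothetical critic evaluations of the same batch of (s, a) pairs:
q1 = np.array([1.0, 2.0, -0.5])
q2 = np.array([3.0, 2.5, -0.5])
lb, ub = q_bounds(q1, q2)
# The exploration policy acts optimistically w.r.t. the upper bound,
# while the lower bound gives a conservative target for policy updates.
print(lb, ub)
```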


Variational Integrator Networks for Physically Structured Embeddings

1 code implementation 21 Oct 2019 Steindor Saemundsson, Alexander Terenin, Katja Hofmann, Marc Peter Deisenroth

Learning workable representations of dynamical systems is becoming an increasingly important problem in a number of application areas.

Teacher algorithms for curriculum learning of Deep RL in continuously parameterized environments

1 code implementation 16 Oct 2019 Rémy Portelas, Cédric Colas, Katja Hofmann, Pierre-Yves Oudeyer

We consider the problem of how a teacher algorithm can enable an unknown Deep Reinforcement Learning (DRL) student to become good at a skill over a wide range of diverse environments.

Curriculum Learning

Combining No-regret and Q-learning

1 code implementation 7 Oct 2019 Ian A. Kash, Michael Sullins, Katja Hofmann

Counterfactual Regret Minimization (CFR) has found success in settings like poker which have both terminal states and perfect recall.
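
The update at the heart of the CFR family mentioned above is regret matching: play each action with probability proportional to its accumulated positive regret. A minimal self-play sketch on rock-paper-scissors (the game and initialization are illustrative, not from the paper):

```python
import numpy as np

# Regret matching: normalize positive cumulative regrets into a strategy;
# with no positive regret, fall back to uniform play.
def regret_matching(cum_regret):
    positive = np.maximum(cum_regret, 0.0)
    total = positive.sum()
    n = len(cum_regret)
    return positive / total if total > 0 else np.full(n, 1.0 / n)

# Symmetric self-play on rock-paper-scissors; the *average* strategy
# approaches the uniform Nash equilibrium.
payoff = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]], dtype=float)
regret = np.array([1.0, 0.0, 0.0])   # nonzero start so play is nontrivial
avg = np.zeros(3)
for _ in range(10000):
    strat = regret_matching(regret)
    avg += strat
    action_values = payoff @ strat           # each pure action vs. self
    expected = strat @ action_values         # value of current strategy
    regret += action_values - expected       # accumulate instantaneous regret
print(avg / 10000)  # close to [1/3, 1/3, 1/3]
```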


Near-Optimal Online Egalitarian learning in General Sum Repeated Matrix Games

no code implementations 4 Jun 2019 Aristide Tossou, Christos Dimitrakakis, Jaroslaw Rzepecki, Katja Hofmann

We study two-player general sum repeated finite games where the rewards of each player are generated from an unknown distribution.

The MineRL 2019 Competition on Sample Efficient Reinforcement Learning using Human Priors

1 code implementation 22 Apr 2019 William H. Guss, Cayden Codel, Katja Hofmann, Brandon Houghton, Noboru Kuno, Stephanie Milani, Sharada Mohanty, Diego Perez Liebana, Ruslan Salakhutdinov, Nicholay Topin, Manuela Veloso, Phillip Wang

To that end, we introduce: (1) the Minecraft ObtainDiamond task, a sequential decision making environment requiring long-term planning, hierarchical control, and efficient exploration methods; and (2) the MineRL-v0 dataset, a large-scale collection of over 60 million state-action pairs of human demonstrations that can be resimulated into embodied trajectories with arbitrary modifications to game state and visuals.

Decision Making Efficient Exploration +1

The Multi-Agent Reinforcement Learning in MalmÖ (MARLÖ) Competition

2 code implementations 23 Jan 2019 Diego Perez-Liebana, Katja Hofmann, Sharada Prasanna Mohanty, Noboru Kuno, Andre Kramer, Sam Devlin, Raluca D. Gaina, Daniel Ionita

Learning in multi-agent scenarios is a fruitful research direction, but current approaches still show scalability problems in multiple games with general reward settings and different opponent types.

Multi-agent Reinforcement Learning

Fast Context Adaptation via Meta-Learning

1 code implementation 8 Oct 2018 Luisa M. Zintgraf, Kyriacos Shiarlis, Vitaly Kurin, Katja Hofmann, Shimon Whiteson

We propose CAVIA for meta-learning, a simple extension to MAML that is less prone to meta-overfitting, easier to parallelise, and more interpretable.

General Classification Meta-Learning

CAML: Fast Context Adaptation via Meta-Learning

no code implementations 27 Sep 2018 Luisa M Zintgraf, Kyriacos Shiarlis, Vitaly Kurin, Katja Hofmann, Shimon Whiteson

We propose CAML, a meta-learning method for fast adaptation that partitions the model parameters into two parts: context parameters that serve as additional input to the model and are adapted on individual tasks, and shared parameters that are meta-trained and shared across tasks.
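
The context/shared parameter split described above can be sketched on a toy regression task (everything below is an illustrative assumption, not the authors' code: a linear model whose shared weights stay fixed while a scalar context offset is adapted per task):

```python
import numpy as np

# Shared parameters `theta` are frozen at adaptation time; only the small
# context vector `phi` (here a scalar offset) is fit on each task.
def predict(theta, phi, x):
    return x @ theta + phi          # phi acts as extra context input

def adapt_context(theta, x, y, steps=100, lr=0.1):
    phi = 0.0                       # context starts from a fixed init per task
    for _ in range(steps):
        err = predict(theta, phi, x) - y
        phi -= lr * 2 * err.mean()  # gradient step on phi ONLY
    return phi

theta = np.array([2.0])             # stand-in for meta-trained shared weights
x = np.array([[1.0], [2.0], [3.0]])
y = x @ theta + 5.0                 # new task differs by an offset of 5
phi = adapt_context(theta, x, y)
print(round(float(phi), 3))         # the adapted context recovers the offset
```

Adapting only the low-dimensional context is what makes this style of method less prone to meta-overfitting than adapting all parameters.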


Depth and nonlinearity induce implicit exploration for RL

no code implementations 29 May 2018 Justas Dauparas, Ryota Tomioka, Katja Hofmann

The question of how to explore, i.e., take actions with uncertain outcomes to learn about possible future rewards, is a key question in reinforcement learning (RL).


Variational Inference for Data-Efficient Model Learning in POMDPs

no code implementations 23 May 2018 Sebastian Tschiatschek, Kai Arulkumaran, Jan Stühmer, Katja Hofmann

In this paper we propose DELIP, an approach to model learning for POMDPs that utilizes amortized structured variational inference.

Decision Making Decision Making Under Uncertainty +1

Cross Domain Regularization for Neural Ranking Models Using Adversarial Learning

no code implementations 9 May 2018 Daniel Cohen, Bhaskar Mitra, Katja Hofmann, W. Bruce Croft

We use an adversarial discriminator and train our neural ranking model on a small set of domains.

Information Retrieval

Meta Reinforcement Learning with Latent Variable Gaussian Processes

no code implementations 20 Mar 2018 Steindór Sæmundsson, Katja Hofmann, Marc Peter Deisenroth

Learning from small data sets is critical in many practical applications where data collection is time consuming or expensive, e.g., robotics, animal experiments or drug design.

Gaussian Processes Meta-Learning +3

The Atari Grand Challenge Dataset

2 code implementations 31 May 2017 Vitaly Kurin, Sebastian Nowozin, Katja Hofmann, Lucas Beyer, Bastian Leibe

Recent progress in Reinforcement Learning (RL), fueled by its combination with Deep Learning, has enabled impressive results in learning to interact with complex virtual environments, yet real-world applications of RL are still scarce.

Imitation Learning

A Deep Learning Approach for Joint Video Frame and Reward Prediction in Atari Games

no code implementations 21 Nov 2016 Felix Leibfried, Nate Kushman, Katja Hofmann

Reinforcement learning is concerned with identifying reward-maximizing behaviour policies in environments that are initially unknown.

Atari Games Model-based Reinforcement Learning

Memory Lens: How Much Memory Does an Agent Use?

no code implementations 21 Nov 2016 Christoph Dann, Katja Hofmann, Sebastian Nowozin

The study of memory as information that flows from the past to the current action opens avenues to understand and improve successful reinforcement learning algorithms.

Experimental and causal view on information integration in autonomous agents

no code implementations 14 Jun 2016 Philipp Geiger, Katja Hofmann, Bernhard Schölkopf

The amount of digitally available but heterogeneous information about the world is remarkable, and new technologies such as self-driving cars, smart homes, or the internet of things may further increase it.

Decision Making Self-Driving Cars +1

Contextual Dueling Bandits

no code implementations 23 Feb 2015 Miroslav Dudík, Katja Hofmann, Robert E. Schapire, Aleksandrs Slivkins, Masrour Zoghi

The first of these algorithms achieves particularly low regret, even when data is adversarial, although its time and space requirements are linear in the size of the policy space.
