Search Results for author: Matteo Leonetti

Found 14 papers, 3 papers with code

A Utility Maximization Model of Pedestrian and Driver Interactions

no code implementations • 21 Oct 2021 • Yi-Shin Lin, Aravinda Ramakrishnan Srinivasan, Matteo Leonetti, Jac Billington, Gustav Markkula

Many models account for the traffic flow of road users, but few consider the details of local interactions and how they can deteriorate into safety-critical situations.

AI-HRI 2021 Proceedings

no code implementations • 22 Sep 2021 • Reuth Mirsky, Megan Zimmerman, Muneeb Ahmad, Shelly Bagchi, Felix Gervits, Zhao Han, Justin Hart, Daniel Hernández García, Matteo Leonetti, Ross Mead, Emmanuel Senft, Jivko Sinapov, Jason Wilson

In addition, acknowledging that ethics is an inherent part of human-robot interaction, we encourage submissions of work on ethics for HRI.

Human robot interaction

Meta-Reinforcement Learning for Heuristic Planning

no code implementations • 6 Jul 2021 • Ricardo Luna Gutierrez, Matteo Leonetti

In Meta-Reinforcement Learning (meta-RL), an agent is trained on a set of tasks to prepare for and learn faster in new, unseen, but related tasks.

Meta Reinforcement Learning
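
To make the meta-RL setting described in the entry above more concrete, here is a minimal, hypothetical sketch (not the authors' method): an outer loop samples training tasks and folds what the agent learns back into shared meta-parameters, so that adapting to an unseen task needs only a few episodes. The Agent class, the task names, and the update rules are placeholders.

    import random

    class Agent:
        """Hypothetical agent whose meta-parameters are shared across tasks."""

        def __init__(self):
            self.meta_params = {}  # knowledge accumulated over all training tasks

        def adapt(self, task, episodes):
            """Inner loop: fine-tune a copy of the meta-parameters on one task."""
            task_params = dict(self.meta_params)
            for _ in range(episodes):
                pass  # collect rollouts on `task` and update task_params (omitted)
            return task_params

        def meta_update(self, task_params):
            """Outer loop: fold task-specific knowledge back into the shared parameters."""
            self.meta_params.update(task_params)

    training_tasks = ["task_a", "task_b", "task_c"]  # assumed set of related tasks
    agent = Agent()
    for _ in range(100):  # meta-training iterations
        task = random.choice(training_tasks)
        agent.meta_update(agent.adapt(task, episodes=5))

    # At test time the agent adapts to a new, unseen task in only a few episodes.
    adapted_params = agent.adapt("unseen_task", episodes=5)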

Occlusion-Aware Search for Object Retrieval in Clutter

no code implementations • 6 Nov 2020 • Wissam Bejjani, Wisdom C. Agboh, Mehmet R. Dogar, Matteo Leonetti

Solving this task requires reasoning over the likely locations of the target object.

Information-theoretic Task Selection for Meta-Reinforcement Learning

1 code implementation • NeurIPS 2020 • Ricardo Luna Gutierrez, Matteo Leonetti

In Meta-Reinforcement Learning (meta-RL), an agent is trained on a set of tasks to prepare for and learn faster in new, unseen, but related tasks.

Meta Reinforcement Learning

Curriculum Learning with a Progression Function

no code implementations • 2 Aug 2020 • Andrea Bassich, Francesco Foglino, Matteo Leonetti, Daniel Kudenko

Curriculum learning for reinforcement learning is an increasingly popular technique in which an agent is trained on a defined sequence of intermediate tasks, called a curriculum, to increase its performance and learning speed.

Curriculum Learning
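
As a generic illustration of a curriculum (not the progression-function method of the paper above), the sketch below trains an agent on an ordered sequence of intermediate tasks before the final target task; the task names, episode counts, and train routine are hypothetical placeholders.

    def train(agent, env_name, episodes):
        """Hypothetical training routine: run `episodes` episodes in env_name."""
        for _ in range(episodes):
            pass  # interact with the environment and update the agent (omitted)

    curriculum = ["corridor_easy", "corridor_cluttered", "full_maze"]  # assumed task sequence
    agent = {}  # placeholder for any RL agent; knowledge carries over between stages

    for intermediate_task in curriculum[:-1]:
        train(agent, intermediate_task, episodes=200)  # easier tasks first
    train(agent, curriculum[-1], episodes=1000)  # target task, started from a better policy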

Curriculum Learning for Reinforcement Learning Domains: A Framework and Survey

no code implementations • 10 Mar 2020 • Sanmit Narvekar, Bei Peng, Matteo Leonetti, Jivko Sinapov, Matthew E. Taylor, Peter Stone

Reinforcement learning (RL) is a popular paradigm for addressing sequential decision tasks in which the agent has only limited environmental feedback.

Curriculum Learning, Transfer Learning

Human-like Planning for Reaching in Cluttered Environments

1 code implementation • 28 Feb 2020 • Mohamed Hasan, Matthew Warburton, Wisdom C. Agboh, Mehmet R. Dogar, Matteo Leonetti, He Wang, Faisal Mushtaq, Mark Mon-Williams, Anthony G. Cohn

From this, we devised a qualitative representation of the task space to abstract the decision making, irrespective of the number of obstacles.

Decision Making, Virtual Reality

A gray-box approach for curriculum learning

no code implementations • 17 Jun 2019 • Francesco Foglino, Matteo Leonetti, Simone Sagratella, Ruggiero Seccia

Curriculum learning is often employed in deep reinforcement learning to let the agent progress more quickly towards better behaviors.

Curriculum Learning

Curriculum Learning for Cumulative Return Maximization

1 code implementation • 13 Jun 2019 • Francesco Foglino, Christiano Coletto Christakou, Ricardo Luna Gutierrez, Matteo Leonetti

We propose a task sequencing algorithm that maximizes the cumulative return, that is, the return obtained by the agent across all the learning episodes.

Combinatorial Optimization, Curriculum Learning, +1
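
To illustrate the objective named in the entry above (the cumulative return across all learning episodes), here is a hypothetical sketch of how one candidate curriculum could be scored. The run_episode callback, the episode counts, and the task names are assumptions, and the combinatorial search over candidate curricula is not shown.

    def cumulative_return(curriculum, run_episode, episodes_per_task=100):
        """Sum the return of every learning episode over every task in the curriculum."""
        total = 0.0
        for task in curriculum:
            for _ in range(episodes_per_task):
                total += run_episode(task)  # return collected while still learning on `task`
        return total

    # A task sequencing method would compare candidate curricula by this score and
    # keep the ordering with the largest cumulative return.
    score = cumulative_return(["easy", "medium", "target"], run_episode=lambda task: 0.0)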

An Optimization Framework for Task Sequencing in Curriculum Learning

no code implementations • 31 Jan 2019 • Francesco Foglino, Christiano Coletto Christakou, Matteo Leonetti

In reinforcement learning, all previous task sequencing methods have shaped exploration with the objective of reducing the time to reach a given performance level.

Curriculum Learning
