Search Results for author: Andrea Baisero

Found 7 papers, 2 papers with code

A Deeper Understanding of State-Based Critics in Multi-Agent Reinforcement Learning

no code implementations · 3 Jan 2022 · Xueguang Lyu, Andrea Baisero, Yuchen Xiao, Christopher Amato

Centralized Training for Decentralized Execution, where training is done in a centralized offline fashion, has become a popular solution paradigm in Multi-Agent Reinforcement Learning.

Multi-agent Reinforcement Learning · reinforcement-learning · +1

Reconciling Rewards with Predictive State Representations

1 code implementation · 7 Jun 2021 · Andrea Baisero, Christopher Amato

We show that there is a mismatch between optimal POMDP policies and the optimal PSR policies derived from approximate rewards.

Unbiased Asymmetric Reinforcement Learning under Partial Observability

no code implementations · 25 May 2021 · Andrea Baisero, Christopher Amato

In partially observable reinforcement learning, offline training gives access to latent information which is not available during online training and/or execution, such as the system state.

Partially Observable Reinforcement Learning · reinforcement-learning · +1
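The asymmetric setting the abstract describes can be illustrated with a toy sketch. Everything below is a hypothetical assumption for illustration (the corridor environment, variable names, and tabular TD update are not from the paper): a critic indexed by the latent state, which is only available during offline training, alongside observations that alias the interior states. The paper's actual contribution, analyzing and correcting the bias of such state-based critics, is not reproduced here.

```python
import random

random.seed(0)

# Hypothetical toy POMDP (not from the paper): a corridor of N cells where
# the agent only observes whether it stands at a wall, so all interior
# cells look identical.  The goal is the right-most cell.
N, GOAL = 5, 4
ALPHA, GAMMA = 0.1, 0.9

def observe(state):
    return "wall" if state in (0, GOAL) else "interior"

# Asymmetric split: the critic's value table is indexed by the latent
# state (offline-only information), while a policy intended for online
# execution could condition only on observe(state).
V = {s: 0.0 for s in range(N)}

for _ in range(500):
    s = 0
    for _ in range(50):
        a = random.choice((-1, 1))          # uniform exploration policy
        s2 = min(max(s + a, 0), GOAL)
        r = 1.0 if s2 == GOAL else 0.0
        done = s2 == GOAL
        # TD(0) update of the state-based critic; this naive use of the
        # latent state is the kind of shortcut the paper scrutinizes.
        V[s] += ALPHA * (r + GAMMA * V[s2] * (not done) - V[s])
        s = s2
        if done:
            break
```

After training, the critic assigns higher value to states nearer the goal (e.g. `V[3] > V[0]`), even though states 1 through 3 are indistinguishable to the agent at execution time.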

Active Goal Recognition

no code implementations · 24 Sep 2019 · Christopher Amato, Andrea Baisero

We propose to combine goal recognition with other observer tasks in order to obtain active goal recognition (AGR).

Identification of Unmodeled Objects from Symbolic Descriptions

no code implementations · 23 Jan 2017 · Andrea Baisero, Stefan Otte, Peter Englert, Marc Toussaint

Successful human-robot cooperation hinges on each agent's ability to process and exchange information about the shared environment and the task at hand.

Ensemble Learning · Object

On a Family of Decomposable Kernels on Sequences

no code implementations · 26 Jan 2015 · Andrea Baisero, Florian T. Pokorny, Carl Henrik Ek

In many applications data is naturally presented in terms of orderings of some basic elements or symbols.

Dynamic Time Warping · General Classification
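As a point of reference for kernels defined on orderings of basic symbols, here is a classic example: the p-spectrum (k-mer) string kernel, which compares two sequences via counts of their shared length-p substrings. This is only a simple illustration of the general idea of sequence kernels; it is not the decomposable family proposed in the paper.

```python
from collections import Counter

def spectrum_kernel(s, t, p=2):
    """p-spectrum kernel: inner product of length-p substring count vectors.

    A standard sequence kernel shown purely for illustration; the paper's
    decomposable kernels (related to dynamic time warping and string
    kernels) are a different, more general construction.
    """
    cs = Counter(s[i:i + p] for i in range(len(s) - p + 1))
    ct = Counter(t[i:i + p] for i in range(len(t) - p + 1))
    # Sum matching substring counts; equivalent to <phi(s), phi(t)> in
    # the feature space of length-p substring occurrence counts.
    return sum(cs[m] * ct[m] for m in cs)
```

For example, `spectrum_kernel("abab", "abab")` returns 5 (the 2-mer counts are {ab: 2, ba: 1}), and the kernel is symmetric by construction.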
