Search Results for author: Miguel Vasco

Found 13 papers, 6 papers with code

Can Transformers Smell Like Humans?

1 code implementation • 5 Nov 2024 • Farzaneh Taleb, Miguel Vasco, Antônio H. Ribeiro, Mårten Björkman, Danica Kragic

The human brain encodes stimuli from the environment into representations that form a sensory perception of the world.

Reducing Variance in Meta-Learning via Laplace Approximation for Regression Tasks

no code implementations • 2 Oct 2024 • Alfredo Reichlin, Gustaf Tegnér, Miguel Vasco, Hang Yin, Mårten Björkman, Danica Kragic

Given a finite set of sample points, meta-learning algorithms aim to learn an optimal adaptation strategy for new, unseen tasks.

Meta-Learning • regression
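The excerpt above states the few-shot regression setting only in general terms. As a point of reference, here is a minimal, generic sketch of gradient-based meta-learning (a Reptile-style update) on sine-wave regression; the task distribution, network sizes, step counts, and learning rates are illustrative assumptions, and this is not the Laplace-approximation method proposed in the paper.

```python
import copy
import torch
import torch.nn as nn

def sample_task():
    # A random-amplitude, random-phase sine wave defines one regression task.
    amp, phase = torch.rand(1) * 4 + 0.1, torch.rand(1) * 3.1416
    return lambda x: amp * torch.sin(x + phase)

net = nn.Sequential(nn.Linear(1, 40), nn.ReLU(), nn.Linear(40, 1))

for step in range(1000):
    task = sample_task()
    x_support = torch.rand(10, 1) * 10 - 5          # finite set of sample points
    y_support = task(x_support)

    # Inner loop: adapt a copy of the meta-parameters to the new task.
    fast = copy.deepcopy(net)
    inner_opt = torch.optim.SGD(fast.parameters(), lr=1e-2)
    for _ in range(5):
        inner_opt.zero_grad()
        nn.functional.mse_loss(fast(x_support), y_support).backward()
        inner_opt.step()

    # Outer loop: move the meta-parameters toward the adapted parameters.
    with torch.no_grad():
        for p, q in zip(net.parameters(), fast.parameters()):
            p += 0.05 * (q - p)
```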

A Super-human Vision-based Reinforcement Learning Agent for Autonomous Racing in Gran Turismo

no code implementations • 18 Jun 2024 • Miguel Vasco, Takuma Seno, Kenta Kawamoto, Kaushik Subramanian, Peter R. Wurman, Peter Stone

Racing autonomous cars faster than the best human drivers has been a longstanding grand challenge for the fields of Artificial Intelligence and robotics.

Autonomous Racing • Car Racing • +1

NeuralSolver: Learning Algorithms For Consistent and Efficient Extrapolation Across General Tasks

no code implementations • 23 Feb 2024 • Bernardo Esteves, Miguel Vasco, Francisco S. Melo

We contribute NeuralSolver, a novel recurrent solver that can efficiently and consistently extrapolate, i.e., learn algorithms from smaller problems (in terms of observation size) and execute those algorithms in larger problems.
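To make the extrapolation idea concrete, below is a generic sketch of a recurrent solver, not the NeuralSolver architecture itself: one small convolutional block with shared weights is applied repeatedly, so a model trained on small grids can simply be unrolled for more steps on larger grids at test time. Channel counts, grid sizes, and the residual step rule are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class RecurrentSolver(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.encode = nn.Conv2d(1, channels, 3, padding=1)
        self.step = nn.Sequential(                    # shared recurrent block
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())
        self.decode = nn.Conv2d(channels, 1, 1)

    def forward(self, x, n_steps):
        h = self.encode(x)
        for _ in range(n_steps):                      # same weights at every step
            h = h + self.step(h)                      # residual recurrent update
        return self.decode(h)

solver = RecurrentSolver()
small = torch.randn(4, 1, 8, 8)                       # training-sized problems
large = torch.randn(4, 1, 32, 32)                     # larger problems at test time
out_small = solver(small, n_steps=10)
out_large = solver(large, n_steps=40)                 # extrapolate by unrolling longer
```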

Learning Goal-Conditioned Policies from Sub-Optimal Offline Data via Metric Learning

no code implementations • 16 Feb 2024 • Alfredo Reichlin, Miguel Vasco, Hang Yin, Danica Kragic

We use the proposed value function to guide the learning of a policy in an actor-critic fashion, a method we name MetricRL.

Metric Learning • Offline RL • +1
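As context for the actor-critic phrasing in the excerpt above, here is a minimal, generic actor-critic update in which a value function guides the policy. The metric-learned value function of MetricRL is replaced by an ordinary critic network, and the dimensions, learning rate, and dummy batch are illustrative assumptions rather than the paper's setup.

```python
import torch
import torch.nn as nn

obs_dim, n_actions = 8, 4
actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, n_actions))
critic = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()), lr=3e-4)

def update(obs, act, rew, next_obs, done, gamma=0.99):
    # Critic: regress V(s) toward the one-step bootstrapped target.
    with torch.no_grad():
        target = rew + gamma * (1 - done) * critic(next_obs).squeeze(-1)
    value = critic(obs).squeeze(-1)
    critic_loss = nn.functional.mse_loss(value, target)

    # Actor: raise the log-probability of actions with positive advantage.
    advantage = (target - value).detach()
    logp = torch.log_softmax(actor(obs), dim=-1).gather(1, act.unsqueeze(1)).squeeze(1)
    actor_loss = -(logp * advantage).mean()

    opt.zero_grad()
    (actor_loss + critic_loss).backward()
    opt.step()

# Example call with a dummy batch of transitions.
batch = 32
update(torch.randn(batch, obs_dim), torch.randint(n_actions, (batch,)),
       torch.randn(batch), torch.randn(batch, obs_dim), torch.zeros(batch))
```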

Centralized Training with Hybrid Execution in Multi-Agent Reinforcement Learning

1 code implementation • 12 Oct 2022 • Pedro P. Santos, Diogo S. Carvalho, Miguel Vasco, Alberto Sardinha, Pedro A. Santos, Ana Paiva, Francisco S. Melo

We introduce hybrid execution in multi-agent reinforcement learning (MARL), a new paradigm in which agents aim to successfully complete cooperative tasks with arbitrary communication levels at execution time by taking advantage of information-sharing among the agents.

Multi-agent Reinforcement Learning • reinforcement-learning • +2
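A minimal sketch of what acting under arbitrary communication levels can look like: each agent's policy consumes its own observation plus teammates' messages, and a random drop mask simulates a degraded channel at execution time. The shared message encoder, the masking scheme, and all dimensions are assumptions for illustration, not the method of the paper above.

```python
import torch
import torch.nn as nn

n_agents, obs_dim, msg_dim, n_actions = 3, 6, 4, 5

encoder = nn.Linear(obs_dim, msg_dim)                       # shared message encoder
policy = nn.Sequential(
    nn.Linear(obs_dim + (n_agents - 1) * msg_dim, 64), nn.ReLU(),
    nn.Linear(64, n_actions))

def act(observations, p_drop=0.5):
    """observations: (n_agents, obs_dim); returns one greedy action per agent."""
    messages = encoder(observations)                        # (n_agents, msg_dim)
    actions = []
    for i in range(n_agents):
        others = torch.cat([messages[j] for j in range(n_agents) if j != i])
        mask = (torch.rand(others.shape) > p_drop).float()  # simulate lost communication
        inp = torch.cat([observations[i], others * mask])
        actions.append(policy(inp).argmax().item())
    return actions

print(act(torch.randn(n_agents, obs_dim), p_drop=0.7))
```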

Perceive, Represent, Generate: Translating Multimodal Information to Robotic Motion Trajectories

no code implementations • 6 Apr 2022 • Fábio Vital, Miguel Vasco, Alberto Sardinha, Francisco Melo

We present Perceive-Represent-Generate (PRG), a novel three-stage framework that maps perceptual information of different modalities (e.g., visual or sound), corresponding to a sequence of instructions, to an adequate sequence of movements to be executed by a robot.

Geometric Multimodal Contrastive Representation Learning

1 code implementation • 7 Feb 2022 • Petra Poklukar, Miguel Vasco, Hang Yin, Francisco S. Melo, Ana Paiva, Danica Kragic

Learning representations of multimodal data that are both informative and robust to missing modalities at test time remains a challenging problem due to the inherent heterogeneity of data obtained from different channels.

Reinforcement Learning (RL) • Representation Learning
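Since the excerpt above concerns multimodal representations that remain useful when modalities are missing at test time, here is a generic InfoNCE-style sketch of aligning modality-specific embeddings with a joint embedding in a shared space. The encoders, temperature, and loss pairing are illustrative assumptions rather than the actual GMC objective; the paper's released code is the authoritative reference.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

latent = 16
img_enc = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, latent))
snd_enc = nn.Sequential(nn.Linear(40, 128), nn.ReLU(), nn.Linear(128, latent))
joint_enc = nn.Sequential(nn.Linear(784 + 40, 128), nn.ReLU(), nn.Linear(128, latent))

def contrastive_loss(z_mod, z_joint, temperature=0.1):
    # Pull each modality embedding toward the joint embedding of the same sample,
    # pushing it away from the joint embeddings of other samples in the batch.
    z_mod = F.normalize(z_mod, dim=-1)
    z_joint = F.normalize(z_joint, dim=-1)
    logits = z_mod @ z_joint.t() / temperature          # (batch, batch) similarities
    labels = torch.arange(z_mod.size(0))                # positives on the diagonal
    return F.cross_entropy(logits, labels)

img, snd = torch.randn(32, 784), torch.randn(32, 40)
z_joint = joint_enc(torch.cat([img, snd], dim=-1))
loss = contrastive_loss(img_enc(img), z_joint) + contrastive_loss(snd_enc(snd), z_joint)
loss.backward()
```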

How to Sense the World: Leveraging Hierarchy in Multimodal Perception for Robust Reinforcement Learning Agents

1 code implementation • 7 Oct 2021 • Miguel Vasco, Hang Yin, Francisco S. Melo, Ana Paiva

This work addresses the problem of sensing the world: how to learn a multimodal representation of a reinforcement learning agent's environment that allows the execution of tasks under incomplete perceptual conditions.

Atari Games • Deep Reinforcement Learning • +2
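A generic sketch of the robustness idea in the excerpt above: encode each modality separately, fuse whatever is available, and randomly drop modalities during training to simulate incomplete perception. The averaging fusion, dropout rate, and encoder shapes are assumptions; the paper itself builds a hierarchical multimodal representation rather than this simple scheme.

```python
import torch
import torch.nn as nn

latent = 32
encoders = nn.ModuleDict({
    "image": nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, latent)),
    "sound": nn.Sequential(nn.Linear(40, 128), nn.ReLU(), nn.Linear(128, latent)),
})

def encode(inputs, p_drop=0.3, training=True):
    """inputs: dict modality name -> batch tensor; returns a fused state vector."""
    zs = []
    for name, x in inputs.items():
        if training and len(inputs) > 1 and torch.rand(1).item() < p_drop:
            continue                                    # pretend this modality is missing
        zs.append(encoders[name](x))
    if not zs:                                          # keep at least one modality
        name, x = next(iter(inputs.items()))
        zs.append(encoders[name](x))
    return torch.stack(zs).mean(dim=0)                  # fuse the available modalities

state = encode({"image": torch.randn(8, 784), "sound": torch.randn(8, 40)})
state_img_only = encode({"image": torch.randn(8, 784)}, training=False)
```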

Explainable Agency by Revealing Suboptimality in Child-Robot Learning Scenarios

1 code implementation • 6 Nov 2020 • Silvia Tulli, Marta Couto, Miguel Vasco, Elmira Yadollahi, Francisco Melo, Ana Paiva

In the application scenario, the child and the robot learn together how to play a zero-sum game that requires logical and mathematical thinking.

Explanation Generation

Playing Games in the Dark: An approach for cross-modality transfer in reinforcement learning

1 code implementation • 28 Nov 2019 • Rui Silva, Miguel Vasco, Francisco S. Melo, Ana Paiva, Manuela Veloso

In this work we explore the use of latent representations obtained from multiple input sensory modalities (such as images or sounds) in allowing an agent to learn and exploit policies over different subsets of input modalities.

OpenAI Gym • reinforcement-learning • +2

Learning multimodal representations for sample-efficient recognition of human actions

no code implementations • 6 Mar 2019 • Miguel Vasco, Francisco S. Melo, David Martins de Matos, Ana Paiva, Tetsunari Inamura

In this work we present motion concepts, a novel multimodal representation of human actions in a household environment.
