Search Results for author: Roberto Calandra

Found 46 papers, 24 papers with code

A Touch, Vision, and Language Dataset for Multimodal Alignment

1 code implementation • 20 Feb 2024 • Letian Fu, Gaurav Datta, Huang Huang, William Chung-Ho Panitch, Jaimyn Drake, Joseph Ortiz, Mustafa Mukadam, Mike Lambeta, Roberto Calandra, Ken Goldberg

This is partially due to the difficulty of obtaining natural language labels for tactile data and the complexity of aligning tactile readings with both visual observations and language descriptions.

Language Modelling, Text Generation

Evetac: An Event-based Optical Tactile Sensor for Robotic Manipulation

no code implementations • 2 Dec 2023 • Niklas Funk, Erik Helmut, Georgia Chalvatzaki, Roberto Calandra, Jan Peters

To overcome this shortcoming, we study the idea of replacing the RGB camera with an event-based camera and introduce a new event-based optical tactile sensor called Evetac.

Benchmarking

The Alignment Ceiling: Objective Mismatch in Reinforcement Learning from Human Feedback

no code implementations • 31 Oct 2023 • Nathan Lambert, Roberto Calandra

Reinforcement learning from human feedback (RLHF) has emerged as a powerful technique to make large language models (LLMs) more capable in complex settings.

GSM8K, Model-based Reinforcement Learning, +1

A Unified View on Solving Objective Mismatch in Model-Based Reinforcement Learning

1 code implementation • 10 Oct 2023 • Ran Wei, Nathan Lambert, Anthony McDonald, Alfredo Garcia, Roberto Calandra

Model-based Reinforcement Learning (MBRL) aims to make agents more sample-efficient, adaptive, and explainable by learning an explicit model of the environment.

Model-based Reinforcement Learning

General In-Hand Object Rotation with Vision and Touch

no code implementations • 18 Sep 2023 • Haozhi Qi, Brent Yi, Sudharshan Suresh, Mike Lambeta, Yi Ma, Roberto Calandra, Jitendra Malik

We introduce RotateIt, a system that enables fingertip-based object rotation along multiple axes by leveraging multimodal sensory inputs.

Object

Investigating Compounding Prediction Errors in Learned Dynamics Models

no code implementations • 17 Mar 2022 • Nathan Lambert, Kristofer Pister, Roberto Calandra

In this paper, we explore the effects of the subcomponents of a control problem on long-term prediction error, including choosing a system, collecting data, and training a model.

Model-based Reinforcement Learning
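
To make the idea of compounding error concrete, here is a minimal sketch, assuming a synthetic linear system and a slightly perturbed "learned" model (neither taken from the paper): the model is rolled forward on its own predictions, so one-step errors accumulate with the prediction horizon.

import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[0.99, 0.05], [-0.05, 0.99]])        # true (toy) linear system
A_model = A_true + rng.normal(scale=0.01, size=(2, 2))   # slightly inaccurate learned model

x_true = np.array([1.0, 0.0])
x_pred = x_true.copy()
errors = []
for t in range(50):
    x_true = A_true @ x_true
    x_pred = A_model @ x_pred        # the model consumes its own predictions
    errors.append(np.linalg.norm(x_true - x_pred))

print("prediction error at horizons 1, 10, 50:", errors[0], errors[9], errors[-1])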

Automated Reinforcement Learning (AutoRL): A Survey and Open Problems

no code implementations • 11 Jan 2022 • Jack Parker-Holder, Raghu Rajan, Xingyou Song, André Biedenkapp, Yingjie Miao, Theresa Eimer, Baohe Zhang, Vu Nguyen, Roberto Calandra, Aleksandra Faust, Frank Hutter, Marius Lindauer

The combination of Reinforcement Learning (RL) with deep learning has led to a series of impressive feats, with many believing (deep) RL provides a path towards generally capable agents.

AutoML, Meta-Learning, +2

What Robot do I Need? Fast Co-Adaptation of Morphology and Control using Graph Neural Networks

no code implementations • 3 Nov 2021 • Kevin Sebastian Luck, Roberto Calandra, Michael Mistry

The co-adaptation of robot morphology and behaviour becomes increasingly important with the advent of fast 3D-manufacturing methods and efficient deep reinforcement learning algorithms.

reinforcement-learning, Reinforcement Learning (RL)

Active 3D Shape Reconstruction from Vision and Touch

2 code implementations • NeurIPS 2021 • Edward J. Smith, David Meger, Luis Pineda, Roberto Calandra, Jitendra Malik, Adriana Romero, Michal Drozdzal

In this paper, we focus on this problem and introduce a system composed of: 1) a haptic simulator leveraging high spatial resolution vision-based tactile sensors for active touching of 3D objects; 2) a mesh-based 3D shape reconstruction model that relies on tactile or visuotactile signals; and 3) a set of data-driven solutions with either tactile or visuotactile priors to guide the shape exploration.

3D Reconstruction, 3D Shape Reconstruction

PyTouch: A Machine Learning Library for Touch Processing

1 code implementation • 26 May 2021 • Mike Lambeta, Huazhe Xu, Jingwei Xu, Po-Wei Chou, Shaoxiong Wang, Trevor Darrell, Roberto Calandra

With the increased availability of rich tactile sensors, there is a correspondingly growing need for open-source, integrated software capable of efficiently and effectively processing raw touch measurements into high-level signals that can be used for control and decision-making.

BIG-bench Machine Learning, Decision Making, +1

MBRL-Lib: A Modular Library for Model-based Reinforcement Learning

3 code implementations • 20 Apr 2021 • Luis Pineda, Brandon Amos, Amy Zhang, Nathan O. Lambert, Roberto Calandra

MBRL-Lib is designed as a platform both for researchers, to easily develop, debug, and compare new algorithms, and for non-expert users, to lower the barrier to deploying state-of-the-art algorithms.

Model-based Reinforcement Learning, reinforcement-learning, +1
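
As a rough illustration of the kind of loop such a library supports, here is a generic model-based RL sketch: fit a dynamics model to collected transitions, then plan through the model with random shooting. It deliberately avoids the actual MBRL-Lib API; the environment, the linear-regression "model", and the assumed-known reward are toy placeholders.

import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM = 2, 1

def env_step(state, action):                      # placeholder environment
    next_state = 0.95 * state + 0.1 * np.tanh(action)
    return next_state, -float(state @ state)      # reward: stay near the origin

def fit_model(states, actions, next_states):
    X = np.hstack([states, actions])              # "learned" model: linear least squares
    W, *_ = np.linalg.lstsq(X, next_states, rcond=None)
    return W

def plan_random_shooting(W, state, horizon=10, n_candidates=128):
    best_action, best_return = None, -np.inf
    for _ in range(n_candidates):
        actions = rng.uniform(-1, 1, size=(horizon, ACTION_DIM))
        s, ret = state.copy(), 0.0
        for a in actions:
            ret += -float(s @ s)                  # reward model assumed known here
            s = np.hstack([s, a]) @ W             # imagined step through the learned model
        if ret > best_return:
            best_action, best_return = actions[0], ret
    return best_action

states, actions, next_states = [], [], []
state = rng.normal(size=STATE_DIM)
for step in range(200):
    if len(states) < 20:                          # warm-up with random actions
        action = rng.uniform(-1, 1, size=ACTION_DIM)
    else:
        W = fit_model(np.array(states), np.array(actions), np.array(next_states))
        action = plan_random_shooting(W, state)
    next_state, _ = env_step(state, action)
    states.append(state); actions.append(action); next_states.append(next_state)
    state = next_state
print("collected transitions:", len(states))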

On the Importance of Hyperparameter Optimization for Model-based Reinforcement Learning

1 code implementation • 26 Feb 2021 • Baohe Zhang, Raghu Rajan, Luis Pineda, Nathan Lambert, André Biedenkapp, Kurtland Chua, Frank Hutter, Roberto Calandra

We show that this problem can be tackled effectively with automated HPO, which we demonstrate yields significantly improved performance compared to hand-tuning by human experts.

Hyperparameter Optimization, Model-based Reinforcement Learning, +2
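
As a toy illustration of what automated HPO means in this context, here is plain random search over a synthetic objective standing in for an MBRL training run; the hyperparameter names, ranges, and the fake objective are all made up for the sketch.

import numpy as np

rng = np.random.default_rng(0)

def train_and_evaluate(lr, horizon, ensemble_size):
    # Synthetic stand-in for an MBRL training run that returns average episode return.
    return (-(np.log10(lr) + 3.0) ** 2
            - 0.001 * (horizon - 20) ** 2
            - 0.05 * abs(ensemble_size - 5))

best_config, best_score = None, -np.inf
for _ in range(50):
    config = {
        "lr": float(10 ** rng.uniform(-5, -2)),
        "horizon": int(rng.integers(5, 40)),
        "ensemble_size": int(rng.integers(1, 10)),
    }
    score = train_and_evaluate(**config)
    if score > best_score:
        best_config, best_score = config, score

print("best configuration found:", best_config)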

Invariant Representations for Reinforcement Learning without Reconstruction

no code implementations • ICLR 2021 • Amy Zhang, Rowan Thomas McAllister, Roberto Calandra, Yarin Gal, Sergey Levine

We study how representation learning can accelerate reinforcement learning from rich observations, such as images, without relying either on domain knowledge or pixel-reconstruction.

Causal Inference, reinforcement-learning, +2
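
A minimal PyTorch sketch of a bisimulation-style representation loss in the spirit of this line of work: latent distances are regressed onto reward differences plus a discounted distance between predicted next-latent Gaussians. It is a simplification (for instance, the latent dynamics here ignores actions) and all network sizes are placeholders.

import torch
import torch.nn as nn

OBS_DIM, LATENT_DIM, GAMMA = 64, 16, 0.99

encoder = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(), nn.Linear(64, LATENT_DIM))
# Latent dynamics head predicts mean and log-std of the next latent state.
dynamics = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, 2 * LATENT_DIM))

def bisim_loss(obs, rewards):
    z = encoder(obs)                                   # (B, LATENT_DIM)
    mu, log_std = dynamics(z).chunk(2, dim=-1)
    perm = torch.randperm(obs.shape[0])                # compare each sample with a shuffled partner
    z_dist = torch.norm(z - z[perm], p=1, dim=-1)
    r_dist = (rewards - rewards[perm]).abs().squeeze(-1)
    # 2-Wasserstein distance between diagonal Gaussian next-latent predictions.
    w2 = torch.sqrt(((mu - mu[perm]) ** 2).sum(-1)
                    + ((log_std.exp() - log_std[perm].exp()) ** 2).sum(-1) + 1e-8)
    target = r_dist + GAMMA * w2.detach()
    return ((z_dist - target) ** 2).mean()

obs, rewards = torch.randn(32, OBS_DIM), torch.randn(32, 1)
loss = bisim_loss(obs, rewards)
loss.backward()
print("bisimulation loss:", float(loss))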

TACTO: A Fast, Flexible, and Open-source Simulator for High-Resolution Vision-based Tactile Sensors

1 code implementation • 15 Dec 2020 • Shaoxiong Wang, Mike Lambeta, Po-Wei Chou, Roberto Calandra

We believe that TACTO is a step towards the widespread adoption of touch sensing in robotic applications, and towards enabling machine learning practitioners interested in multi-modal learning and control.

Benchmarking

Active MR k-space Sampling with Reinforcement Learning

2 code implementations • 20 Jul 2020 • Luis Pineda, Sumana Basu, Adriana Romero, Roberto Calandra, Michal Drozdzal

Deep learning approaches have recently shown great promise in accelerating magnetic resonance imaging (MRI) acquisition.

Image Reconstruction, reinforcement-learning, +1

3D Shape Reconstruction from Vision and Touch

1 code implementation • NeurIPS 2020 • Edward J. Smith, Roberto Calandra, Adriana Romero, Georgia Gkioxari, David Meger, Jitendra Malik, Michal Drozdzal

When a toddler is presented with a new toy, their instinctual behaviour is to pick it up and inspect it with their hand and eyes in tandem, clearly searching over its surface to properly understand what they are playing with.

3D Shape Reconstruction

Learning Invariant Representations for Reinforcement Learning without Reconstruction

2 code implementations • 18 Jun 2020 • Amy Zhang, Rowan McAllister, Roberto Calandra, Yarin Gal, Sergey Levine

We study how representation learning can accelerate reinforcement learning from rich observations, such as images, without relying either on domain knowledge or pixel-reconstruction.

Causal Inference, reinforcement-learning, +2

Learning to Play Table Tennis From Scratch using Muscular Robots

no code implementations • 10 Jun 2020 • Dieter Büchler, Simon Guist, Roberto Calandra, Vincent Berenz, Bernhard Schölkopf, Jan Peters

This work is the first to (a) achieve fail-safe learning of a safety-critical dynamic task using anthropomorphic robot arms, (b) learn a precision-demanding problem with a PAM-driven system despite the control challenges, and (c) train robots to play table tennis without real balls.

reinforcement-learning, Reinforcement Learning (RL)

Plan2Vec: Unsupervised Representation Learning by Latent Plans

1 code implementation • 7 May 2020 • Ge Yang, Amy Zhang, Ari S. Morcos, Joelle Pineau, Pieter Abbeel, Roberto Calandra

In this paper we introduce plan2vec, an unsupervised representation learning approach that is inspired by reinforcement learning.

Motion Planning, reinforcement-learning, +2
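
A hedged sketch of the general recipe suggested above: use a local metric to build a neighbourhood graph, obtain global distances by planning (shortest paths) over that graph, and fit an embedding whose distances match the planned ones. The Euclidean local metric and the simple stress-minimization fit are stand-ins for the learned components in the paper.

import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 200)                     # toy data on a 1D manifold (a noisy spiral)
data = np.stack([t * np.cos(t), t * np.sin(t)], axis=1) + 0.05 * rng.normal(size=(200, 2))

local = cdist(data, data)                              # local metric: plain Euclidean distance
k = 6
graph = np.full_like(local, np.inf)                    # k-nearest-neighbour graph
for i in range(len(data)):
    nearest = np.argsort(local[i])[1:k + 1]
    graph[i, nearest] = local[i, nearest]

planned = shortest_path(graph, directed=False)         # global distances via shortest-path "planning"
planned[np.isinf(planned)] = planned[np.isfinite(planned)].max()

emb = rng.normal(size=(len(data), 1))                  # fit a 1D embedding by stress minimization
for _ in range(500):
    diff = emb - emb.T                                 # pairwise signed differences
    dist = np.abs(diff)
    grad = ((dist - planned) * np.sign(diff)).sum(axis=1, keepdims=True) / len(data)
    emb -= 0.01 * grad
print("embedding range:", float(emb.min()), float(emb.max()))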

Model-Based Meta-Reinforcement Learning for Flight with Suspended Payloads

2 code implementations • 23 Apr 2020 • Suneel Belkhale, Rachel Li, Gregory Kahn, Rowan McAllister, Roberto Calandra, Sergey Levine

Our experiments demonstrate that our online adaptation approach outperforms non-adaptive methods on a series of challenging suspended payload transportation tasks.

Meta-Learning, Meta Reinforcement Learning, +2

Adversarial Continual Learning

1 code implementation • ECCV 2020 • Sayna Ebrahimi, Franziska Meier, Roberto Calandra, Trevor Darrell, Marcus Rohrbach

We show that shared features are significantly less prone to forgetting and propose a novel hybrid continual learning framework that learns a disjoint representation for task-invariant and task-specific features required to solve a sequence of tasks.

Continual Learning, Image Classification

OmniTact: A Multi-Directional High Resolution Touch Sensor

1 code implementation • 16 Mar 2020 • Akhil Padmanabha, Frederik Ebert, Stephen Tian, Roberto Calandra, Chelsea Finn, Sergey Levine

We compare with a state-of-the-art tactile sensor that is only sensitive on one side, as well as a state-of-the-art multi-directional tactile sensor, and find that OmniTact's combination of high resolution and multi-directional sensing is crucial for reliably inserting the electrical connector and allows for higher accuracy in the state estimation task.

Vocal Bursts Intensity Prediction

Objective Mismatch in Model-based Reinforcement Learning

2 code implementations • ICLR 2020 • Nathan Lambert, Brandon Amos, Omry Yadan, Roberto Calandra

In our experiments, we study this objective mismatch issue and demonstrate that the likelihood of one-step ahead predictions is not always correlated with control performance.

Model-based Reinforcement Learning, reinforcement-learning, +1
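
The diagnostic the abstract alludes to can be phrased in a few lines: compare a model-quality metric (e.g. validation log-likelihood) against downstream control performance across a set of models and check their correlation. The numbers below are synthetic placeholders, only meant to show the shape of such an analysis, not to reproduce the paper's results.

import numpy as np

rng = np.random.default_rng(1)
n_models = 20
# Pretend per-model validation negative log-likelihood and the episode return obtained
# when each model is used inside a controller (here deliberately weakly related).
val_nll = rng.normal(loc=1.0, scale=0.3, size=n_models)
episode_return = -0.2 * val_nll + rng.normal(scale=0.5, size=n_models)

corr = np.corrcoef(-val_nll, episode_return)[0, 1]
print(f"correlation(model log-likelihood, control return) = {corr:.2f}")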

Re-Examining Linear Embeddings for High-Dimensional Bayesian Optimization

1 code implementation • NeurIPS 2020 • Benjamin Letham, Roberto Calandra, Akshara Rai, Eytan Bakshy

We show empirically that properly addressing these issues significantly improves the efficacy of linear embeddings for BO on a range of problems, including learning a gait policy for robot locomotion.

Bayesian Optimization, Misconceptions, +1
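
A small sketch of BO in a random linear embedding, the general setting this paper re-examines: optimize in a low-dimensional box, map candidates into the ambient space through a random matrix, and fit a GP surrogate in the embedding. The toy objective, the clipping to the box, and the simple UCB acquisition are illustrative choices, not the paper's method.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
D, d = 100, 4                          # ambient and embedding dimensionality
B = rng.normal(size=(D, d))            # random linear embedding

def objective(x):                      # toy high-dimensional objective (only 4 dims matter)
    return -np.sum((x[:4] - 0.3) ** 2)

def to_ambient(z):
    return np.clip(B @ z, -1.0, 1.0)   # project the embedded point into the box [-1, 1]^D

Z = rng.uniform(-1, 1, size=(5, d))    # initial design in the embedding
Y = np.array([objective(to_ambient(z)) for z in Z])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(25):
    gp.fit(Z, Y)
    cand = rng.uniform(-1, 1, size=(512, d))           # candidate set in the embedding
    mu, sigma = gp.predict(cand, return_std=True)
    z_next = cand[np.argmax(mu + 1.0 * sigma)]         # UCB acquisition
    Z = np.vstack([Z, z_next])
    Y = np.append(Y, objective(to_ambient(z_next)))

print("best value found:", Y.max())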

Data-efficient Co-Adaptation of Morphology and Behaviour with Deep Reinforcement Learning

no code implementations • 15 Nov 2019 • Kevin Sebastian Luck, Heni Ben Amor, Roberto Calandra

Key to our approach is the possibility of leveraging previously tested morphologies and behaviors to estimate the performance of new candidate morphologies.

reinforcement-learning, Reinforcement Learning (RL)

Data-efficient Learning of Morphology and Controller for a Microrobot

1 code implementation • 3 May 2019 • Thomas Liao, Grant Wang, Brian Yang, Rene Lee, Kristofer Pister, Sergey Levine, Roberto Calandra

Robot design is often a slow and difficult process requiring the iterative construction and testing of prototypes, with the goal of sequentially optimizing the design.

Bayesian Optimization

Manipulation by Feel: Touch-Based Control with Deep Predictive Models

no code implementations • 11 Mar 2019 • Stephen Tian, Frederik Ebert, Dinesh Jayaraman, Mayur Mudigonda, Chelsea Finn, Roberto Calandra, Sergey Levine

Touch sensing is widely acknowledged to be important for dexterous robotic manipulation, but exploiting tactile sensing for continuous, non-prehensile manipulation is challenging.

Learning to Identify Object Instances by Touch: Tactile Recognition via Multimodal Matching

no code implementations • 8 Mar 2019 • Justin Lin, Roberto Calandra, Sergey Levine

We propose a novel framing of the problem as multi-modal recognition: the goal of our system is to recognize, given a visual and tactile observation, whether or not these observations correspond to the same object.

Object

Fast Neural Network Verification via Shadow Prices

no code implementations • 19 Feb 2019 • Vicenc Rubies-Royo, Roberto Calandra, Dusan M. Stipanovic, Claire Tomlin

To use neural networks in safety-critical settings it is paramount to provide assurances on their runtime operation.

Collision Avoidance

Low Level Control of a Quadrotor with Deep Model-Based Reinforcement Learning

no code implementations • 11 Jan 2019 • Nathan O. Lambert, Daniel S. Drew, Joseph Yaconelli, Roberto Calandra, Sergey Levine, Kristofer S. J. Pister

Designing effective low-level robot controllers often entails platform-specific implementations that require manual heuristic parameter tuning, significant system knowledge, or long design times.

Model-based Reinforcement Learning, reinforcement-learning, +1

Unsupervised Exploration with Deep Model-Based Reinforcement Learning

no code implementations • 27 Sep 2018 • Kurtland Chua, Rowan McAllister, Roberto Calandra, Sergey Levine

We show that both challenges can be addressed by representing model-uncertainty, which can both guide exploration in the unsupervised phase and ensure that the errors in the model are not exploited by the planner in the goal-directed phase.

Model-based Reinforcement Learning, reinforcement-learning, +1
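
A minimal sketch of uncertainty-guided exploration, assuming ensemble disagreement as the uncertainty measure: the variance of an ensemble's next-state predictions serves as an intrinsic reward that points the agent towards poorly modelled regions. The linear "ensemble" below is a placeholder for learned probabilistic models.

import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM, ENSEMBLE_SIZE = 4, 2, 5

# Toy "learned" ensemble: linear models with slightly different weights.
members = [rng.normal(scale=0.1, size=(STATE_DIM + ACTION_DIM, STATE_DIM))
           for _ in range(ENSEMBLE_SIZE)]

def intrinsic_reward(state, action):
    x = np.concatenate([state, action])
    preds = np.stack([state + x @ W for W in members])   # (ENSEMBLE_SIZE, STATE_DIM)
    return float(preds.var(axis=0).mean())               # high disagreement => worth exploring

state, action = rng.normal(size=STATE_DIM), rng.normal(size=ACTION_DIM)
print("intrinsic exploration bonus:", intrinsic_reward(state, action))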

Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models

10 code implementations • NeurIPS 2018 • Kurtland Chua, Roberto Calandra, Rowan McAllister, Sergey Levine

Model-based reinforcement learning (RL) algorithms can attain excellent sample efficiency, but often lag behind the best model-free algorithms in terms of asymptotic performance.

Model-based Reinforcement Learning, reinforcement-learning, +1
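
In that spirit, here is a compact sketch of trajectory-sampling MPC with a probabilistic ensemble: candidate action sequences are refined with the cross-entropy method, each rollout is scored through a randomly chosen ensemble member, and only the first action of the best plan is executed before replanning. The toy linear ensemble, cost function, and hyperparameters are placeholders, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM = 3, 1
ENSEMBLE_SIZE, HORIZON, POP_SIZE, N_ELITE, CEM_ITERS = 5, 15, 200, 20, 4

# Toy probabilistic "ensemble": each member predicts a Gaussian next state.
ensemble = [rng.normal(scale=0.1, size=(STATE_DIM + ACTION_DIM, STATE_DIM))
            for _ in range(ENSEMBLE_SIZE)]

def predict(member, state, action):
    x = np.concatenate([state, action])
    mean = state + x @ ensemble[member]                   # residual dynamics
    return mean + rng.normal(scale=0.01, size=STATE_DIM)  # fixed aleatoric noise for the sketch

def cost(state, action):
    return float(state @ state + 0.1 * action @ action)   # quadratic cost placeholder

def evaluate_plan(state, actions):
    member = rng.integers(ENSEMBLE_SIZE)                  # one ensemble member per particle
    total, s = 0.0, state.copy()
    for a in actions:
        total += cost(s, a)
        s = predict(member, s, a)
    return total

def cem_plan(state):
    mean = np.zeros((HORIZON, ACTION_DIM))
    std = np.ones((HORIZON, ACTION_DIM))
    for _ in range(CEM_ITERS):
        plans = rng.normal(mean, std, size=(POP_SIZE, HORIZON, ACTION_DIM))
        costs = np.array([evaluate_plan(state, p) for p in plans])
        elites = plans[np.argsort(costs)[:N_ELITE]]
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mean[0]                                        # MPC: execute only the first action

state = rng.normal(size=STATE_DIM)
print("first planned action:", cem_plan(state))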

More Than a Feeling: Learning to Grasp and Regrasp using Vision and Touch

no code implementations • 28 May 2018 • Roberto Calandra, Andrew Owens, Dinesh Jayaraman, Justin Lin, Wenzhen Yuan, Jitendra Malik, Edward H. Adelson, Sergey Levine

This model, a deep multimodal convolutional network, predicts the outcome of a candidate grasp adjustment and then executes a grasp by iteratively selecting the most promising actions.

Robotic Grasping
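
A hedged PyTorch sketch of the outcome-prediction-and-selection idea: a two-branch network scores candidate grasp adjustments from an image and a tactile reading, and the most promising adjustment is selected. The architecture sizes and the 4-dimensional action encoding are invented for illustration and are not the authors' network.

import torch
import torch.nn as nn

class GraspOutcomeNet(nn.Module):
    def __init__(self, action_dim=4):
        super().__init__()
        self.vision = nn.Sequential(nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                                    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                                    nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.touch = nn.Sequential(nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                                   nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                                   nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Sequential(nn.Linear(32 + 32 + action_dim, 64), nn.ReLU(),
                                  nn.Linear(64, 1))

    def forward(self, image, tactile, action):
        feats = torch.cat([self.vision(image), self.touch(tactile), action], dim=-1)
        return torch.sigmoid(self.head(feats))            # predicted grasp success probability

net = GraspOutcomeNet()
image = torch.randn(8, 3, 64, 64)        # same scene repeated for 8 candidate adjustments
tactile = torch.randn(8, 3, 64, 64)
candidate_actions = torch.randn(8, 4)
with torch.no_grad():
    p_success = net(image, tactile, candidate_actions).squeeze(-1)
print("selected grasp adjustment:", candidate_actions[p_success.argmax()])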

Learning Flexible and Reusable Locomotion Primitives for a Microrobot

no code implementations • 1 Mar 2018 • Brian Yang, Grant Wang, Roberto Calandra, Daniel Contreras, Sergey Levine, Kristofer Pister

This approach formalizes locomotion as a contextual policy search task to collect data, and subsequently uses that data to learn multi-objective locomotion primitives that can be used for planning.

Navigate

The Feeling of Success: Does Touch Sensing Help Predict Grasp Outcomes?

1 code implementation • 16 Oct 2017 • Roberto Calandra, Andrew Owens, Manu Upadhyaya, Wenzhen Yuan, Justin Lin, Edward H. Adelson, Sergey Levine

In this work, we investigate the question of whether touch sensing aids in predicting grasp outcomes within a multimodal sensing framework that combines vision and touch.

Industrial Robots, Robotic Grasping

Goal-Driven Dynamics Learning via Bayesian Optimization

no code implementations • 27 Mar 2017 • Somil Bansal, Roberto Calandra, Ted Xiao, Sergey Levine, Claire J. Tomlin

Real-world robots are becoming increasingly complex and commonly act in poorly understood environments where it is extremely challenging to model or learn their true dynamics.

Active Learning, Bayesian Optimization

Manifold Gaussian Processes for Regression

1 code implementation • 24 Feb 2014 • Roberto Calandra, Jan Peters, Carl Edward Rasmussen, Marc Peter Deisenroth

This feature space is often learned in an unsupervised way, which might lead to data representations that are not useful for the overall regression task.

Gaussian Processes, regression
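
The core construction lends itself to a compact sketch: an RBF kernel evaluated on the output of a neural feature map, with the map and the kernel hyperparameters trained jointly by maximizing the GP marginal likelihood. The step-function data, network size, and optimizer settings below are illustrative, not the paper's experiments.

import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.linspace(-3, 3, 60).unsqueeze(-1)
y = torch.sign(X).squeeze(-1) + 0.1 * torch.randn(60)    # a step function: hard for a plain RBF GP

feature_map = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 2))
log_lengthscale = torch.zeros(1, requires_grad=True)
log_noise = torch.tensor([-2.0], requires_grad=True)

def kernel(a, b):
    h_a, h_b = feature_map(a), feature_map(b)            # RBF kernel on learned features
    d2 = torch.cdist(h_a, h_b).pow(2)
    return torch.exp(-0.5 * d2 / torch.exp(log_lengthscale).pow(2))

params = list(feature_map.parameters()) + [log_lengthscale, log_noise]
opt = torch.optim.Adam(params, lr=1e-2)
for step in range(500):
    K = kernel(X, X) + (torch.exp(log_noise) + 1e-4) * torch.eye(60)
    L = torch.linalg.cholesky(K)
    alpha = torch.cholesky_solve(y.unsqueeze(-1), L)
    # Negative log marginal likelihood (up to an additive constant).
    nll = 0.5 * (y.unsqueeze(-1) * alpha).sum() + torch.log(torch.diag(L)).sum()
    opt.zero_grad()
    nll.backward()
    opt.step()

print("final negative log marginal likelihood:", float(nll))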
