Search Results for author: Matthew R. Walter

Found 27 papers, 12 papers with code

NeRFuser: Large-Scale Scene Representation by NeRF Fusion

1 code implementation · 22 May 2023 · Jiading Fang, Shengjie Lin, Igor Vasiljevic, Vitor Guizilini, Rares Ambrus, Adrien Gaidon, Gregory Shakhnarovich, Matthew R. Walter

A practical benefit of implicit visual representations like Neural Radiance Fields (NeRFs) is their memory efficiency: large scenes can be efficiently stored and shared as small neural nets instead of collections of images.
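Fusing several NeRFs of overlapping regions means combining their renders of the same view. As a toy illustration only (the paper's blending is learned; this inverse-distance weighting is an illustrative assumption), per-NeRF pixel renders could be merged like this:

```python
def blend_renders(renders, distances, p=2):
    """Blend per-NeRF renders of the same pixel with inverse-distance
    weights, so NeRFs whose captured regions lie closer to the target
    view contribute more to the fused result."""
    weights = [1.0 / (d ** p) for d in distances]
    total = sum(weights)
    return sum(w * r for w, r in zip(weights, renders)) / total
```

Equidistant NeRFs average their renders; a closer NeRF dominates the blend.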

Invariance Through Latent Alignment

no code implementations · 15 Dec 2021 · Takuma Yoneda, Ge Yang, Matthew R. Walter, Bradly Stadie

A robot's deployment environment often involves perceptual changes that differ from what it has experienced during training.

Tasks: Data Augmentation

Self-Supervised Camera Self-Calibration from Video

no code implementations · 6 Dec 2021 · Jiading Fang, Igor Vasiljevic, Vitor Guizilini, Rares Ambrus, Greg Shakhnarovich, Adrien Gaidon, Matthew R. Walter

Camera calibration is integral to robotics and computer vision algorithms that seek to infer geometric properties of the scene from visual input streams.

Tasks: Autonomous Vehicles, Camera Calibration, +2
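The intrinsics such a method recovers enter through the camera's projection model. A minimal pinhole sketch (the standard model, not the paper's self-supervised calibration procedure):

```python
def project(point, fx, fy, cx, cy):
    """Project a 3-D point in the camera frame to pixel coordinates
    with a pinhole model: focal lengths (fx, fy) and principal point
    (cx, cy) are exactly the intrinsics self-calibration estimates."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point must lie in front of the camera")
    return (fx * x / z + cx, fy * y / z + cy)
```

Self-supervised calibration works by making this projection differentiable and optimizing the intrinsics against a photometric objective over video frames.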

Boosting Contrastive Self-Supervised Learning with False Negative Cancellation

1 code implementation · 23 Nov 2020 · Tri Huynh, Simon Kornblith, Matthew R. Walter, Michael Maire, Maryam Khademi

While positive pairs can be generated reliably (e.g., as different views of the same image), it is difficult to accurately establish negative pairs, defined as samples from different images regardless of their semantic content or visual features.

Tasks: Contrastive Learning, Representation Learning, +3
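A toy sketch of the core idea: candidate negatives that look too much like the anchor are flagged as likely false negatives and removed from the contrastive loss. The cosine-similarity threshold here is an illustrative assumption, not the paper's exact identification strategy:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def filter_false_negatives(anchor, candidates, threshold=0.9):
    """Split candidate negatives into likely false negatives (too
    similar to the anchor to be a true negative) and true negatives."""
    true_neg, false_neg = [], []
    for c in candidates:
        (false_neg if cosine(anchor, c) >= threshold else true_neg).append(c)
    return true_neg, false_neg
```

Cancelled false negatives can simply be dropped, or, as the paper also explores, attracted as extra positives.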

Pow-Wow: A Dataset and Study on Collaborative Communication in Pommerman

no code implementations · ICML Workshop LaReL 2020 · Takuma Yoneda, Matthew R. Walter, Jason Naradowsky

In this work we perform a controlled study of human language use in a competitive team-based game, and search for useful lessons for structuring communication protocol between autonomous agents.

Concurrent Training Improves the Performance of Behavioral Cloning from Observation

no code implementations · 3 Aug 2020 · Zachary W. Robertson, Matthew R. Walter

In contrast, learning from observation offers a way to utilize unlabeled demonstrations (e.g., video) to perform imitation learning.

Tasks: Imitation Learning
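A common recipe for learning from observation (as in behavioral cloning from observation) is to fit an inverse dynamics model on the agent's own experience, then use it to infer the actions missing from state-only demonstrations. A tabular sketch, an illustrative simplification rather than the paper's method:

```python
from collections import Counter, defaultdict

def fit_inverse_dynamics(transitions):
    """Tabular inverse dynamics model: map each observed (s, s') pair
    to the most frequent action, learned from the agent's own
    action-labeled experience."""
    counts = defaultdict(Counter)
    for s, a, s_next in transitions:
        counts[(s, s_next)][a] += 1
    return {k: c.most_common(1)[0][0] for k, c in counts.items()}

def label_demonstration(obs_sequence, inv_model):
    """Infer the actions missing from a state-only demonstration."""
    return [inv_model[(s, s_next)]
            for s, s_next in zip(obs_sequence, obs_sequence[1:])]
```

Once the demonstration is labeled, ordinary behavioral cloning can be applied to the inferred state-action pairs.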

Residual Policy Learning for Shared Autonomy

1 code implementation · Proceedings of Robotics: Science and Systems (RSS) 2020 · Charles Schaff, Matthew R. Walter

Shared autonomy provides an effective framework for human-robot collaboration that takes advantage of the complementary strengths of humans and robots to achieve common goals.
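In residual policy learning for shared autonomy, the learned policy outputs a correction that is added to the human pilot's command rather than replacing it. A minimal sketch of that blending step (the scaling factor and clipping bounds here are illustrative assumptions):

```python
def shared_autonomy_action(human_action, residual, alpha=0.5, limit=1.0):
    """Blend a human command with a learned corrective residual:
    scale the residual by alpha, add it elementwise, and clip the
    result to the action bounds [-limit, limit]."""
    blended = [h + alpha * r for h, r in zip(human_action, residual)]
    return [max(-limit, min(limit, a)) for a in blended]
```

Because the residual only perturbs the human's input, the human retains control when the learned correction is small.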


Loop Estimator for Discounted Values in Markov Reward Processes

1 code implementation · 15 Feb 2020 · Falcon Z. Dai, Matthew R. Walter

At the heart of the policy iteration algorithms commonly used and studied in the discounted setting of reinforcement learning, the policy evaluation step estimates the value of states using samples from the Markov reward process induced by following a Markov policy in a Markov decision process.
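The loop estimator exploits the regenerative structure of a Markov reward process: a trajectory is split into loops that start and end at the target state, and the identity V(s) = E[loop reward] / (1 - E[gamma^loop length]) turns per-loop averages into a value estimate. A sketch under that reading of the paper:

```python
def loop_estimate(states, rewards, s, gamma=0.9):
    """Estimate the discounted value of state s from one trajectory by
    splitting it into loops at s and applying
    V(s) = E[discounted loop reward] / (1 - E[gamma**loop_length])."""
    loop_rewards, loop_discounts = [], []
    start = None
    for t, st in enumerate(states):
        if st == s:
            if start is not None:
                tau = t - start  # length of the completed loop
                r = sum(gamma**k * rewards[start + k] for k in range(tau))
                loop_rewards.append(r)
                loop_discounts.append(gamma**tau)
            start = t
    if not loop_rewards:
        raise ValueError("need at least one completed loop at s")
    mean_r = sum(loop_rewards) / len(loop_rewards)
    mean_d = sum(loop_discounts) / len(loop_discounts)
    return mean_r / (1 - mean_d)
```

On a deterministic two-state cycle with rewards 1, 0, 1, 0, ... this recovers the exact value 1 / (1 - gamma^2).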

Language-guided Semantic Mapping and Mobile Manipulation in Partially Observable Environments

no code implementations · 22 Oct 2019 · Siddharth Patki, Ethan Fahnestock, Thomas M. Howard, Matthew R. Walter

Recent advances in data-driven models for grounded language understanding have enabled robots to interpret increasingly complex instructions.

Tasks: Instruction Following

DIODE: A Dense Indoor and Outdoor DEpth Dataset

1 code implementation · 1 Aug 2019 · Igor Vasiljevic, Nick Kolkin, Shanyi Zhang, Ruotian Luo, Haochen Wang, Falcon Z. Dai, Andrea F. Daniele, Mohammadreza Mostajabi, Steven Basart, Matthew R. Walter, Gregory Shakhnarovich

We introduce DIODE, a dataset that contains thousands of diverse, high-resolution color images with accurate, dense, long-range depth measurements.

Maximum Expected Hitting Cost of a Markov Decision Process and Informativeness of Rewards

no code implementations · 3 Jul 2019 · Falcon Z. Dai, Matthew R. Walter

We propose a new complexity measure for Markov decision processes (MDPs), the maximum expected hitting cost (MEHC).


Multigrid Neural Memory

1 code implementation · ICML 2020 · Tri Huynh, Michael Maire, Matthew R. Walter

We introduce a novel approach to endowing neural networks with emergent, long-term, large-scale memory.

Tasks: Question Answering

Inferring Compact Representations for Efficient Natural Language Understanding of Robot Instructions

no code implementations · 21 Mar 2019 · Siddharth Patki, Andrea F. Daniele, Matthew R. Walter, Thomas M. Howard

The speed and accuracy with which robots are able to interpret natural language is fundamental to realizing effective human-robot interaction.

Tasks: Natural Language Understanding

Jointly Learning to Construct and Control Agents using Deep Reinforcement Learning

3 code implementations · ICLR 2018 · Charles Schaff, David Yunis, Ayan Chakrabarti, Matthew R. Walter

The physical design of a robot and the policy that controls its motion are inherently coupled, and should be determined according to the task and environment.

Tasks: Reinforcement Learning (RL)

Satellite Image-based Localization via Learned Embeddings

no code implementations · 4 Apr 2017 · Dong-Ki Kim, Matthew R. Walter

We propose a vision-based method that localizes a ground vehicle using publicly available satellite imagery as the only prior knowledge of the environment.

Tasks: Image-Based Localization
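Localization against satellite imagery via learned embeddings typically reduces, at query time, to nearest-neighbour retrieval: match the ground view's embedding against a database of geo-tagged satellite-tile embeddings. A retrieval sketch (squared Euclidean distance is an illustrative choice, not necessarily the paper's metric):

```python
def localize(query_embedding, database):
    """Return the geo-tag of the satellite tile whose embedding is
    nearest to the ground-view query embedding. The database holds
    (location, embedding) pairs."""
    def dist2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return min(database, key=lambda item: dist2(query_embedding, item[1]))[0]
```

The learned part of such a system is the embedding function that maps ground and overhead views into a shared space; retrieval itself stays this simple.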

Jointly Optimizing Placement and Inference for Beacon-based Localization

1 code implementation · 24 Mar 2017 · Charles Schaff, David Yunis, Ayan Chakrabarti, Matthew R. Walter

The accuracy of such a beacon-based localization system depends both on how beacons are distributed in the environment and on how the robot's location is inferred from noisy and potentially ambiguous measurements.

Coherent Dialogue with Attention-based Language Models

no code implementations · 21 Nov 2016 · Hongyuan Mei, Mohit Bansal, Matthew R. Walter

We model coherent conversation continuation via RNN-based dialogue models equipped with a dynamic attention mechanism.

Tasks: Language Modelling
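The dynamic attention mechanism lets the decoder reweight the conversation history at each step. The standard dot-product attention computation it builds on can be sketched as follows (a generic formulation, not the paper's exact parameterization):

```python
import math

def attention(query, keys, values):
    """Dot-product attention: score the query against each key,
    softmax the scores into weights, and return the weighted sum of
    the values as the context vector."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(values[0])
    context = [sum(w * v[d] for w, v in zip(weights, values))
               for d in range(dim)]
    return context, weights
```

A query aligned with one key concentrates nearly all the weight there, so the context vector reproduces that key's value.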

Navigational Instruction Generation as Inverse Reinforcement Learning with Neural Machine Translation

no code implementations · 11 Oct 2016 · Andrea F. Daniele, Mohit Bansal, Matthew R. Walter

We first decide which information to share with the user according to their preferences, using a policy trained from human demonstrations via inverse reinforcement learning.

Tasks: Machine Translation, Navigate, +3

Learning Articulated Motion Models from Visual and Lingual Signals

no code implementations · 17 Nov 2015 · Zhengyang Wu, Mohit Bansal, Matthew R. Walter

In this paper, we present a multimodal learning framework that incorporates both visual and lingual information to estimate the structure and parameters that define kinematic models of articulated objects.

Tasks: Language Modelling, Word Embeddings

Accurate Vision-based Vehicle Localization using Satellite Imagery

no code implementations · 30 Oct 2015 · Hang Chu, Hongyuan Mei, Mohit Bansal, Matthew R. Walter

We propose a method for accurately localizing ground vehicles with the aid of satellite imagery.

Tasks: Visual Localization

What to talk about and how? Selective Generation using LSTMs with Coarse-to-Fine Alignment

1 code implementation · NAACL 2016 · Hongyuan Mei, Mohit Bansal, Matthew R. Walter

We propose an end-to-end, domain-independent neural encoder-aligner-decoder model for selective generation, i.e., the joint task of content selection and surface realization.

Tasks: Data-to-Text Generation

Listen, Attend, and Walk: Neural Mapping of Navigational Instructions to Action Sequences

1 code implementation · 12 Jun 2015 · Hongyuan Mei, Mohit Bansal, Matthew R. Walter

We propose a neural sequence-to-sequence model for direction following, a task that is essential to realizing effective autonomous agents.

Tasks: Natural Language Understanding

Learning Articulated Motions From Visual Demonstration

1 code implementation · 5 Feb 2015 · Sudeep Pillai, Matthew R. Walter, Seth Teller

This paper describes a method by which a robot can acquire an object model by capturing depth imagery of the object as a human moves it through its range of motion.

Tasks: Motion Segmentation, Pose Estimation
