Search Results for author: Daniel Gordon

Found 12 papers, 7 papers with code

Learning Visual Representation from Human Interactions

no code implementations • ICLR 2021 • Kiana Ehsani, Daniel Gordon, Thomas Hai Dang Nguyen, Roozbeh Mottaghi, Ali Farhadi

Learning effective representations of visual data that generalize to a variety of downstream tasks has been a long quest for computer vision.

Action Recognition • Depth Estimation • +2

What Can You Learn from Your Muscles? Learning Visual Representation from Human Interactions

1 code implementation • 16 Oct 2020 • Kiana Ehsani, Daniel Gordon, Thomas Nguyen, Roozbeh Mottaghi, Ali Farhadi

Learning effective representations of visual data that generalize to a variety of downstream tasks has been a long quest for computer vision.

Action Recognition • Depth Estimation • +2

Watching the World Go By: Representation Learning from Unlabeled Videos

1 code implementation • 18 Mar 2020 • Daniel Gordon, Kiana Ehsani, Dieter Fox, Ali Farhadi

Recent single image unsupervised representation learning techniques show remarkable success on a variety of tasks.

Data Augmentation • Representation Learning

ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks

5 code implementations • CVPR 2020 • Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, Dieter Fox

We present ALFRED (Action Learning From Realistic Environments and Directives), a benchmark for learning a mapping from natural language instructions and egocentric vision to sequences of actions for household tasks.

Natural Language Visual Grounding

Shifting the Baseline: Single Modality Performance on Visual Navigation & QA

no code implementations • NAACL 2019 • Jesse Thomason, Daniel Gordon, Yonatan Bisk

We demonstrate the surprising strength of unimodal baselines in multimodal domains, and make concrete recommendations for best practices in future research.

Visual Navigation

What Should I Do Now? Marrying Reinforcement Learning and Symbolic Planning

no code implementations • 6 Jan 2019 • Daniel Gordon, Dieter Fox, Ali Farhadi

In this work we propose Hierarchical Planning and Reinforcement Learning (HIP-RL), a method for merging the benefits and capabilities of Symbolic Planning with the learning abilities of Deep Reinforcement Learning.

Question Answering • Reinforcement Learning • +1

Shifting the Baseline: Single Modality Performance on Visual Navigation & QA

no code implementations • 1 Nov 2018 • Jesse Thomason, Daniel Gordon, Yonatan Bisk

We demonstrate the surprising strength of unimodal baselines in multimodal domains, and make concrete recommendations for best practices in future research.

Question Answering • Visual Navigation

Visual Semantic Planning using Deep Successor Representations

no code implementations • ICCV 2017 • Yuke Zhu, Daniel Gordon, Eric Kolve, Dieter Fox, Li Fei-Fei, Abhinav Gupta, Roozbeh Mottaghi, Ali Farhadi

A crucial capability of real-world intelligent agents is their ability to plan a sequence of actions to achieve their goals in the visual world.

Imitation Learning • Reinforcement Learning

Re3: Real-Time Recurrent Regression Networks for Visual Tracking of Generic Objects

10 code implementations • 17 May 2017 • Daniel Gordon, Ali Farhadi, Dieter Fox

Robust object tracking requires knowledge and understanding of the object being tracked: its appearance, its motion, and how it changes over time.

Object Tracking • Regression • +1
