Search Results for author: Jaeyong Sung

Found 6 papers, 1 paper with code

Learning to Represent Haptic Feedback for Partially-Observable Tasks

no code implementations • 17 May 2017 • Jaeyong Sung, J. Kenneth Salisbury, Ashutosh Saxena

The sense of touch, being the earliest sensory system to develop in the human body [1], plays a critical part in our daily interaction with the environment.

Q-Learning

Robobarista: Learning to Manipulate Novel Objects via Deep Multimodal Embedding

no code implementations • 12 Jan 2016 • Jaeyong Sung, Seok Hyun Jin, Ian Lenz, Ashutosh Saxena

There is a large variety of objects and appliances in human environments, such as stoves, coffee dispensers, juice extractors, and so on.

Structured Prediction

Deep Multimodal Embedding: Manipulating Novel Objects with Point-clouds, Language and Trajectories

no code implementations • 25 Sep 2015 • Jaeyong Sung, Ian Lenz, Ashutosh Saxena

A robot operating in a real-world environment needs to perform reasoning over a variety of sensor modalities such as vision, language and motion trajectories.

Robobarista: Object Part based Transfer of Manipulation Trajectories from Crowd-sourcing in 3D Pointclouds

no code implementations • 13 Apr 2015 • Jaeyong Sung, Seok Hyun Jin, Ashutosh Saxena

We formulate manipulation planning as a structured prediction problem and design a deep learning model that handles large noise in the manipulation demonstrations and learns features from three different modalities: point-clouds, language, and trajectory.

Structured Prediction
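The two Robobarista entries above describe embedding point-cloud, language, and trajectory inputs into a shared space so that a novel object part and instruction can be matched against known manipulation trajectories. The sketch below is only a minimal illustration of that general idea, assuming pre-extracted fixed-size features per modality, small MLP encoders, fusion of the scene modalities by addition, and a triplet loss; all of these choices (dimensions, encoders, fusion, loss) are assumptions for illustration, not the authors' actual Robobarista architecture.

```python
# Minimal sketch of a joint multimodal embedding (illustrative assumptions only,
# not the Robobarista model): point-cloud, language, and trajectory features are
# projected into one space where matching pairs score higher than non-matching ones.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalityEncoder(nn.Module):
    """Maps one modality's pre-extracted feature vector into the joint space."""

    def __init__(self, in_dim, hidden_dim, embed_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, embed_dim),
        )

    def forward(self, x):
        # L2-normalize so similarity reduces to a dot product / cosine score.
        return F.normalize(self.net(x), dim=-1)


class MultimodalEmbedding(nn.Module):
    """Joint embedding of point-cloud, language, and trajectory features."""

    def __init__(self, pc_dim=1024, lang_dim=300, traj_dim=128, embed_dim=64):
        super().__init__()
        self.pc_enc = ModalityEncoder(pc_dim, 256, embed_dim)
        self.lang_enc = ModalityEncoder(lang_dim, 256, embed_dim)
        self.traj_enc = ModalityEncoder(traj_dim, 256, embed_dim)


def triplet_margin_loss(anchor, positive, negative, margin=0.2):
    # Pull the matching (scene, trajectory) pair together and push a
    # non-matching trajectory at least `margin` further away.
    pos = (anchor - positive).pow(2).sum(dim=-1)
    neg = (anchor - negative).pow(2).sum(dim=-1)
    return F.relu(pos - neg + margin).mean()


if __name__ == "__main__":
    model = MultimodalEmbedding()
    pc = torch.randn(8, 1024)       # point-cloud features for an object part
    lang = torch.randn(8, 300)      # natural-language instruction features
    traj_pos = torch.randn(8, 128)  # demonstrated (matching) trajectory features
    traj_neg = torch.randn(8, 128)  # randomly paired (non-matching) trajectory

    # Fuse the scene-side modalities by addition (an assumption for this sketch).
    scene_e = F.normalize(model.pc_enc(pc) + model.lang_enc(lang), dim=-1)
    loss = triplet_margin_loss(scene_e, model.traj_enc(traj_pos), model.traj_enc(traj_neg))
    loss.backward()
    print(f"toy loss: {loss.item():.4f}")
```

In this kind of setup, transferring a trajectory to a novel object reduces to a nearest-neighbor search in the shared space: embed the new scene and pick the stored trajectory whose embedding is closest.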

Synthesizing Manipulation Sequences for Under-Specified Tasks using Unrolled Markov Random Fields

no code implementations • 24 Jun 2013 • Jaeyong Sung, Bart Selman, Ashutosh Saxena

Many tasks in human environments require performing a sequence of navigation and manipulation steps involving objects.
