Search Results for author: Edward H. Adelson

Found 12 papers, 2 papers with code

PoCo: Policy Composition from and for Heterogeneous Robot Learning

no code implementations4 Feb 2024 Lirui Wang, Jialiang Zhao, Yilun Du, Edward H. Adelson, Russ Tedrake

Training general robotic policies from heterogeneous data for different tasks is a significant challenge.

GelSight Svelte: A Human Finger-shaped Single-camera Tactile Robot Finger with Large Sensing Coverage and Proprioceptive Sensing

no code implementations19 Sep 2023 Jialiang Zhao, Edward H. Adelson

Existing methods for estimating proprioceptive information, such as the total forces and torques applied to the finger, from camera-based tactile sensors are not effective when the contact geometry is complex.

FingerSLAM: Closed-loop Unknown Object Localization and Reconstruction from Visuo-tactile Feedback

no code implementations14 Mar 2023 Jialiang Zhao, Maria Bauza, Edward H. Adelson

FingerSLAM is constructed with two constituent pose estimators: a multi-pass refined tactile-based pose estimator that captures movements from detailed local textures, and a single-pass vision-based pose estimator that predicts from a global view of the object.

3D Reconstruction, Object Localization, +1
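
As a loose illustration of the fusion idea (the paper itself builds a full SLAM pipeline around the two estimators, not a simple weighted average), the Python sketch below combines a locally precise tactile pose estimate with a globally anchored visual one by confidence weighting. The names `Pose2D` and `fuse_poses` and the weights are illustrative assumptions, not the authors' implementation.

```python
import math
from dataclasses import dataclass


@dataclass
class Pose2D:
    """Planar pose: translation (x, y) and heading theta in radians."""
    x: float
    y: float
    theta: float


def fuse_poses(tactile: Pose2D, visual: Pose2D, w_tactile: float) -> Pose2D:
    """Confidence-weighted fusion of two independent pose estimates.

    w_tactile in [0, 1] is the weight given to the tactile estimate;
    the visual estimate receives (1 - w_tactile).
    """
    w_v = 1.0 - w_tactile
    x = w_tactile * tactile.x + w_v * visual.x
    y = w_tactile * tactile.y + w_v * visual.y
    # Average headings on the circle to avoid wrap-around artifacts.
    sin_t = w_tactile * math.sin(tactile.theta) + w_v * math.sin(visual.theta)
    cos_t = w_tactile * math.cos(tactile.theta) + w_v * math.cos(visual.theta)
    return Pose2D(x, y, math.atan2(sin_t, cos_t))


if __name__ == "__main__":
    # Tactile odometry is locally precise but drifts; vision is globally
    # consistent but noisier frame to frame, so tactile is weighted higher here.
    fused = fuse_poses(Pose2D(0.102, 0.051, 0.31),
                       Pose2D(0.110, 0.048, 0.28),
                       w_tactile=0.7)
    print(fused)
```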

More Than a Feeling: Learning to Grasp and Regrasp using Vision and Touch

no code implementations28 May 2018 Roberto Calandra, Andrew Owens, Dinesh Jayaraman, Justin Lin, Wenzhen Yuan, Jitendra Malik, Edward H. Adelson, Sergey Levine

This model (a deep, multimodal convolutional network) predicts the outcome of a candidate grasp adjustment, and then executes a grasp by iteratively selecting the most promising actions.

Robotic Grasping
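
A minimal sketch of the iterative selection loop described above: sample candidate grasp adjustments, score each with an outcome predictor, and execute the highest-scoring one. The predictor below is a dummy stand-in for the paper's deep multimodal network on vision and touch; every function name, action parameterization, and constant is an assumption for illustration only.

```python
import numpy as np


def sample_candidate_adjustments(n: int, rng: np.random.Generator) -> np.ndarray:
    """Sample small end-effector adjustments: (dx, dy, dz, d_yaw, d_force)."""
    scale = np.array([0.02, 0.02, 0.01, 0.3, 0.2])
    return rng.uniform(low=-1.0, high=1.0, size=(n, 5)) * scale


def predict_success(observation: np.ndarray, actions: np.ndarray) -> np.ndarray:
    """Stand-in for a learned grasp-outcome model.

    A real system would run a multimodal network on camera + tactile inputs
    together with each candidate action; here a dummy quadratic penalty keeps
    the loop runnable end to end.
    """
    return np.exp(-np.sum(actions ** 2, axis=1)) * (0.5 + 0.5 * observation.mean())


def select_grasp_adjustment(observation: np.ndarray, n_candidates: int = 64,
                            seed: int = 0) -> np.ndarray:
    """Greedy selection: score candidate actions, return the most promising one."""
    rng = np.random.default_rng(seed)
    candidates = sample_candidate_adjustments(n_candidates, rng)
    scores = predict_success(observation, candidates)
    return candidates[int(np.argmax(scores))]


if __name__ == "__main__":
    obs = np.random.default_rng(1).random(16)  # placeholder visuo-tactile features
    print("chosen adjustment:", select_grasp_adjustment(obs))
```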

The Feeling of Success: Does Touch Sensing Help Predict Grasp Outcomes?

1 code implementation16 Oct 2017 Roberto Calandra, Andrew Owens, Manu Upadhyaya, Wenzhen Yuan, Justin Lin, Edward H. Adelson, Sergey Levine

In this work, we investigate the question of whether touch sensing aids in predicting grasp outcomes within a multimodal sensing framework that combines vision and touch.

Industrial Robots, Robotic Grasping
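
To make the vision-plus-touch framing concrete, here is a hedged sketch of a late-fusion binary classifier for grasp success, written with PyTorch. The embedding dimensions, module names, and fusion head are assumptions for illustration; they are not the architecture studied in the paper.

```python
import torch
import torch.nn as nn


class GraspOutcomeClassifier(nn.Module):
    """Late-fusion binary classifier: does this grasp succeed?

    `vision_feat` and `touch_feat` stand in for per-modality embeddings,
    e.g. CNN features of a wrist camera image and of tactile images.
    """

    def __init__(self, vision_dim: int = 128, touch_dim: int = 128, hidden: int = 64):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(vision_dim + touch_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, vision_feat: torch.Tensor, touch_feat: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([vision_feat, touch_feat], dim=-1)
        return torch.sigmoid(self.head(fused)).squeeze(-1)  # P(success)


if __name__ == "__main__":
    model = GraspOutcomeClassifier()
    v = torch.randn(4, 128)  # batch of vision embeddings (placeholder)
    t = torch.randn(4, 128)  # batch of tactile embeddings (placeholder)
    print(model(v, t))       # per-sample success probabilities
```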

Learning visual groups from co-occurrences in space and time

2 code implementations21 Nov 2015 Phillip Isola, Daniel Zoran, Dilip Krishnan, Edward H. Adelson

We propose a self-supervised framework that learns to group visual entities based on their rate of co-occurrence in space and time.

Binary Classification
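
A toy sketch of grouping by co-occurrence rate: the paper trains a network to predict whether two visual patches co-occur (hence the binary-classification framing), whereas the snippet below simply counts co-occurrences of labeled entities across synthetic "frames" and unions entities whose normalized rate exceeds a threshold. All data, names, and the threshold are made up for illustration.

```python
from collections import Counter
from itertools import combinations


def cooccurrence_affinity(frames):
    """Affinity between entities = normalized rate of appearing in the same frame."""
    pair_counts = Counter()
    item_counts = Counter()
    for frame in frames:
        items = set(frame)
        item_counts.update(items)
        pair_counts.update(combinations(sorted(items), 2))
    # Crude stand-in for the learned co-occurrence predictor.
    return {(a, b): c / min(item_counts[a], item_counts[b])
            for (a, b), c in pair_counts.items()}


def group(affinity, items, threshold=0.7):
    """Greedy grouping: union entities whose affinity exceeds the threshold."""
    parent = {x: x for x in items}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for (a, b), score in affinity.items():
        if score >= threshold:
            parent[find(a)] = find(b)
    groups = {}
    for x in items:
        groups.setdefault(find(x), []).append(x)
    return list(groups.values())


if __name__ == "__main__":
    frames = [["sky", "cloud", "grass"], ["sky", "cloud"],
              ["grass", "cow"], ["grass", "cow", "sky"]]
    items = sorted({x for f in frames for x in f})
    print(group(cooccurrence_affinity(frames), items))
```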

Discovering States and Transformations in Image Collections

no code implementations CVPR 2015 Phillip Isola, Joseph J. Lim, Edward H. Adelson

Our system works by generalizing across object classes: states and transformations learned on one set of objects are used to interpret the image collection for an entirely new object class.

Object

On the Appearance of Translucent Edges

no code implementations CVPR 2015 Ioannis Gkioulekas, Bruce Walter, Edward H. Adelson, Kavita Bala, Todd Zickler

We also discuss the existence of shape and material metamers, or combinations of distinct shape or material parameters that generate the same edge profile.

Sparkle Vision: Seeing the World through Random Specular Microfacets

no code implementations26 Dec 2014 Zhengdong Zhang, Phillip Isola, Edward H. Adelson

In this paper, we study the problem of reproducing the world lighting from a single image of an object whose surface is covered with random specular microfacets.
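
If one assumes a calibrated light-transport matrix that maps a discretized environment map to observed pixel intensities, recovering the lighting reduces to a regularized linear inverse problem. The sketch below shows only that generic formulation on synthetic data; it is not the authors' calibration procedure or solver, and the matrix `A`, the dimensions, and the regularizer are illustrative assumptions.

```python
import numpy as np


def recover_lighting(A: np.ndarray, y: np.ndarray, lam: float = 1e-2) -> np.ndarray:
    """Ridge-regularized least squares: argmin_l ||A @ l - y||^2 + lam * ||l||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_pixels, n_light = 2000, 64           # image pixels vs. environment-map bins
    A = rng.random((n_pixels, n_light))    # stand-in for a calibrated transport matrix
    true_light = rng.random(n_light)
    y = A @ true_light + 0.01 * rng.standard_normal(n_pixels)  # noisy observation
    est = recover_lighting(A, y)
    print("relative error:",
          np.linalg.norm(est - true_light) / np.linalg.norm(true_light))
```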
