Search Results for author: Wenzhen Yuan

Found 13 papers, 9 papers with code

The Feeling of Success: Does Touch Sensing Help Predict Grasp Outcomes?

1 code implementation · 16 Oct 2017 · Roberto Calandra, Andrew Owens, Manu Upadhyaya, Wenzhen Yuan, Justin Lin, Edward H. Adelson, Sergey Levine

In this work, we investigate the question of whether touch sensing aids in predicting grasp outcomes within a multimodal sensing framework that combines vision and touch.

Industrial Robots · Robotic Grasping

ViTac: Feature Sharing between Vision and Tactile Sensing for Cloth Texture Recognition

1 code implementation · 21 Feb 2018 · Shan Luo, Wenzhen Yuan, Edward Adelson, Anthony G. Cohn, Raul Fuentes

In this paper, addressing texture recognition from both tactile images and vision for the first time (to the best of our knowledge), we propose a new fusion method named Deep Maximum Covariance Analysis (DMCA) to learn a joint latent space for sharing features between vision and tactile sensing.
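The joint latent space idea behind DMCA can be illustrated with classical maximum covariance analysis, which finds paired projections maximizing the cross-covariance between two views. The sketch below is not the paper's implementation (DMCA learns the features with deep networks); it uses random stand-in feature matrices purely to show the shared-space construction.

```python
import numpy as np

# Paired features: one row per sample, columns are feature dimensions.
# These random matrices are placeholders for learned vision/tactile features.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 16))   # vision features
Y = rng.standard_normal((100, 12))   # tactile features, paired with X

# Center both views, then form the cross-covariance matrix.
Xc = X - X.mean(axis=0)
Yc = Y - Y.mean(axis=0)
C = Xc.T @ Yc / (len(X) - 1)         # shape (16, 12)

# SVD of the cross-covariance gives projection directions that maximize
# covariance between the two views (classical MCA / PLS-SVD).
U, s, Vt = np.linalg.svd(C, full_matrices=False)
k = 4                                 # dimensionality of the joint space
Zx = Xc @ U[:, :k]                    # vision samples in the shared space
Zy = Yc @ Vt[:k].T                    # tactile samples in the shared space
```

In the shared space, a classifier trained on one modality's projections can be applied to the other, which is the feature-sharing idea the abstract describes.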

More Than a Feeling: Learning to Grasp and Regrasp using Vision and Touch

no code implementations · 28 May 2018 · Roberto Calandra, Andrew Owens, Dinesh Jayaraman, Justin Lin, Wenzhen Yuan, Jitendra Malik, Edward H. Adelson, Sergey Levine

This model, a deep multimodal convolutional network, predicts the outcome of a candidate grasp adjustment, and then executes a grasp by iteratively selecting the most promising actions.

Robotic Grasping
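The select-by-predicted-outcome loop described above can be sketched in a few lines. This is an illustrative toy, not the authors' code: `predict_success` is a hypothetical placeholder for the paper's multimodal network, and the candidate actions are made up for the example.

```python
import numpy as np

def predict_success(vision, touch, action):
    # Placeholder scoring function. In the paper this is a deep multimodal
    # CNN over visual and tactile observations; here we just prefer actions
    # near an arbitrary target adjustment of 0.1.
    return -float(np.sum((action - 0.1) ** 2))

def choose_adjustment(vision, touch, candidate_actions):
    # Score every candidate grasp adjustment with the learned predictor
    # and greedily pick the most promising one.
    scores = [predict_success(vision, touch, a) for a in candidate_actions]
    return candidate_actions[int(np.argmax(scores))]

candidates = [np.array([0.0]), np.array([0.1]), np.array([0.2])]
best = choose_adjustment(None, None, candidates)  # picks the 0.1 adjustment
```

Repeating this choose-and-execute step until the predictor is confident of success gives the iterative regrasping behavior the abstract describes.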

Real-time Soft Body 3D Proprioception via Deep Vision-based Sensing

1 code implementation · 8 Apr 2019 · Ruoyu Wang, Shiheng Wang, Songyu Du, Erdong Xiao, Wenzhen Yuan, Chen Feng

Soft bodies made from flexible and deformable materials are popular in many robotics applications, but their proprioceptive sensing has been a long-standing challenge.

Robotics

Learning an Action-Conditional Model for Haptic Texture Generation

no code implementations · 28 Sep 2019 · Negin Heravi, Wenzhen Yuan, Allison M. Okamura, Jeannette Bohg

Therefore, it is challenging to model the mapping from material and user interactions to haptic feedback in a way that generalizes over many variations of the user's input.

Texture Synthesis

Simulation of Vision-based Tactile Sensors using Physics based Rendering

1 code implementation · 24 Dec 2020 · Arpit Agarwal, Tim Man, Wenzhen Yuan

Tactile sensing has seen a rapid adoption with the advent of vision-based tactile sensors.

Robotics · Graphics

ObjectFolder 2.0: A Multisensory Object Dataset for Sim2Real Transfer

1 code implementation · CVPR 2022 · Ruohan Gao, Zilin Si, Yen-Yu Chang, Samuel Clarke, Jeannette Bohg, Li Fei-Fei, Wenzhen Yuan, Jiajun Wu

We present ObjectFolder 2.0, a large-scale, multisensory dataset of common household objects in the form of implicit neural representations that significantly enhances ObjectFolder 1.0 in three aspects.

Object

PoseIt: A Visual-Tactile Dataset of Holding Poses for Grasp Stability Analysis

1 code implementation · 12 Sep 2022 · Shubham Kanitkar, Helen Jiang, Wenzhen Yuan

To facilitate the study of how holding poses affect grasp stability, we present PoseIt, a novel multi-modal dataset that contains visual and tactile data collected from a full cycle of grasping an object, re-positioning the arm to one of the sampled poses, and shaking the object.

Object

Touch and Go: Learning from Human-Collected Vision and Touch

no code implementations · 22 Nov 2022 · Fengyu Yang, Chenyang Ma, Jiacheng Zhang, Jing Zhu, Wenzhen Yuan, Andrew Owens

The ability to associate touch with sight is essential for tasks that require physically interacting with objects in the world.

Image Stylization

Controllable Visual-Tactile Synthesis

1 code implementation · ICCV 2023 · Ruihan Gao, Wenzhen Yuan, Jun-Yan Zhu

Deep generative models have various content creation applications such as graphic design, e-commerce, and virtual try-on.

Virtual Try-on
