Search Results for author: Wenzhen Yuan

Found 17 papers, 11 papers with code

GelBelt: A Vision-based Tactile Sensor for Continuous Sensing of Large Surfaces

no code implementations • 9 Jan 2025 • Mohammad Amin Mirzaee, Hung-Jui Huang, Wenzhen Yuan

Scanning large-scale surfaces is widely demanded in surface reconstruction applications and in defect detection during industrial quality control and maintenance.

Surface Reconstruction

Tactile DreamFusion: Exploiting Tactile Sensing for 3D Generation

1 code implementation • 9 Dec 2024 • Ruihan Gao, Kangle Deng, Gengshan Yang, Wenzhen Yuan, Jun-Yan Zhu

We design a lightweight 3D texture field to synthesize visual and tactile textures, guided by 2D diffusion model priors on both visual and tactile domains.

3D Generation • Image to 3D • +2

MM-CamObj: A Comprehensive Multimodal Dataset for Camouflaged Object Scenarios

1 code implementation • 24 Sep 2024 • Jiacheng Ruan, Wenzhen Yuan, Zehao Lin, Ning Liao, Zhiyu Li, Feiyu Xiong, Ting Liu, Yuzhuo Fu

CamObj-Instruct is collected for fine-tuning the LVLMs with improved instruction-following capabilities, and it includes 11,363 images and 68,849 conversations with diverse instructions.

Instruction Following

Controllable Visual-Tactile Synthesis

1 code implementation • ICCV 2023 • Ruihan Gao, Wenzhen Yuan, Jun-Yan Zhu

Deep generative models have various content creation applications, such as graphic design, e-commerce, and virtual try-on.

Virtual Try-on

Touch and Go: Learning from Human-Collected Vision and Touch

no code implementations • 22 Nov 2022 • Fengyu Yang, Chenyang Ma, Jiacheng Zhang, Jing Zhu, Wenzhen Yuan, Andrew Owens

The ability to associate touch with sight is essential for tasks that require physically interacting with objects in the world.

Image Stylization

PoseIt: A Visual-Tactile Dataset of Holding Poses for Grasp Stability Analysis

1 code implementation • 12 Sep 2022 • Shubham Kanitkar, Helen Jiang, Wenzhen Yuan

To facilitate the study of how holding poses affect grasp stability, we present PoseIt, a novel multi-modal dataset that contains visual and tactile data collected from a full cycle of grasping an object, re-positioning the arm to one of the sampled poses, and shaking the object.

Object

ObjectFolder 2.0: A Multisensory Object Dataset for Sim2Real Transfer

1 code implementation • CVPR 2022 • Ruohan Gao, Zilin Si, Yen-Yu Chang, Samuel Clarke, Jeannette Bohg, Li Fei-Fei, Wenzhen Yuan, Jiajun Wu

We present ObjectFolder 2.0, a large-scale, multisensory dataset of common household objects in the form of implicit neural representations that significantly enhances ObjectFolder 1.0 in three aspects.

Object

Simulation of Vision-based Tactile Sensors using Physics based Rendering

1 code implementation • 24 Dec 2020 • Arpit Agarwal, Tim Man, Wenzhen Yuan

Tactile sensing has seen a rapid adoption with the advent of vision-based tactile sensors.

Robotics • Graphics

Learning an Action-Conditional Model for Haptic Texture Generation

no code implementations • 28 Sep 2019 • Negin Heravi, Wenzhen Yuan, Allison M. Okamura, Jeannette Bohg

Therefore, it is challenging to model the mapping from material and user interactions to haptic feedback in a way that generalizes over many variations of the user's input.

Texture Synthesis

Real-time Soft Body 3D Proprioception via Deep Vision-based Sensing

1 code implementation • 8 Apr 2019 • Ruoyu Wang, Shiheng Wang, Songyu Du, Erdong Xiao, Wenzhen Yuan, Chen Feng

Soft bodies made from flexible and deformable materials are popular in many robotics applications, but their proprioceptive sensing has been a long-standing challenge.

Robotics

More Than a Feeling: Learning to Grasp and Regrasp using Vision and Touch

no code implementations • 28 May 2018 • Roberto Calandra, Andrew Owens, Dinesh Jayaraman, Justin Lin, Wenzhen Yuan, Jitendra Malik, Edward H. Adelson, Sergey Levine

This model, a deep multimodal convolutional network, predicts the outcome of a candidate grasp adjustment, and then executes a grasp by iteratively selecting the most promising actions.

Robotic Grasping
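The predict-then-act loop described in the abstract can be sketched as a greedy policy: score each candidate grasp adjustment with a learned success predictor and execute the most promising one until predicted success is high enough. This is only a toy sketch; `predict_success`, `sample_candidates`, and `apply_action` are hypothetical stand-ins for the paper's multimodal CNN and robot interface, not its actual API.

```python
def regrasp_policy(predict_success, sample_candidates, apply_action,
                   max_steps=10, threshold=0.9):
    """Greedy regrasping: score sampled grasp adjustments with a learned
    success predictor, execute the most promising one each step, and stop
    once predicted success clears the threshold."""
    score = 0.0
    for _ in range(max_steps):
        best = max(sample_candidates(), key=predict_success)
        score = predict_success(best)
        apply_action(best)
        if score >= threshold:
            break
    return score

# Toy usage: candidate "actions" are plain numbers and the predictor
# simply reads each number off as a success probability.
applied = []
final = regrasp_policy(
    predict_success=lambda a: a,
    sample_candidates=lambda: [0.2, 0.95, 0.5],
    apply_action=applied.append,
)
```

In this toy run the policy picks the 0.95 candidate on the first step and stops, since it clears the 0.9 threshold.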

ViTac: Feature Sharing between Vision and Tactile Sensing for Cloth Texture Recognition

1 code implementation • 21 Feb 2018 • Shan Luo, Wenzhen Yuan, Edward Adelson, Anthony G. Cohn, Raul Fuentes

In this paper, we address, to the best of our knowledge for the first time, texture recognition from both tactile images and vision, and propose a new fusion method named Deep Maximum Covariance Analysis (DMCA) to learn a joint latent space for sharing features between vision and tactile sensing.
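DMCA itself is the paper's deep fusion method, but its linear precursor, classical maximum covariance analysis, conveys the core idea: find paired projections of two feature sets (e.g. vision and tactile) that maximize their cross-covariance, via an SVD of the cross-covariance matrix. A minimal NumPy sketch on synthetic data, not the paper's implementation:

```python
import numpy as np

def max_covariance_analysis(X, Y, k):
    """Classical MCA: paired linear projections of two feature sets
    that maximize the covariance between the projected coordinates."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    C = Xc.T @ Yc / (len(X) - 1)          # cross-covariance, shape (dx, dy)
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    Wx, Wy = U[:, :k], Vt[:k].T           # top-k paired projection bases
    return Xc @ Wx, Yc @ Wy               # k-dim codes in the joint space

rng = np.random.default_rng(0)
vision = rng.normal(size=(100, 8))                          # "visual" features
tactile = vision[:, :5] + 0.1 * rng.normal(size=(100, 5))   # correlated "tactile" view
Zv, Zt = max_covariance_analysis(vision, tactile, k=3)
```

Because the synthetic tactile features are a noisy copy of part of the visual features, the leading pair of projected coordinates ends up strongly correlated, which is the sense in which the two modalities share a latent space.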

The Feeling of Success: Does Touch Sensing Help Predict Grasp Outcomes?

1 code implementation • 16 Oct 2017 • Roberto Calandra, Andrew Owens, Manu Upadhyaya, Wenzhen Yuan, Justin Lin, Edward H. Adelson, Sergey Levine

In this work, we investigate the question of whether touch sensing aids in predicting grasp outcomes within a multimodal sensing framework that combines vision and touch.

Industrial Robots • Robotic Grasping