Search Results for author: Yijiong Lin

Found 5 papers, 2 papers with code

TouchSDF: A DeepSDF Approach for 3D Shape Reconstruction using Vision-Based Tactile Sensing

no code implementations · 21 Nov 2023 · Mauro Comi, Yijiong Lin, Alex Church, Alessio Tonioni, Laurence Aitchison, Nathan F. Lepora

To address these challenges, we propose TouchSDF, a Deep Learning approach for tactile 3D shape reconstruction that leverages the rich information provided by a vision-based tactile sensor and the expressivity of the implicit neural representation DeepSDF.

3D Shape Reconstruction
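The abstract above mentions the implicit neural representation DeepSDF, which models a shape as a network mapping a latent shape code and a 3D query point to a signed distance. The following is a minimal, hypothetical sketch of that idea: the layer sizes, random weights, and use of plain NumPy are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal DeepSDF-style sketch (assumed architecture, not the paper's code):
# an MLP maps [latent shape code ++ 3D point] -> signed distance to the surface.

rng = np.random.default_rng(0)

LATENT_DIM = 8   # illustrative latent code size
HIDDEN = 32      # illustrative hidden width

# Randomly initialised weights stand in for trained parameters.
W1 = rng.standard_normal((LATENT_DIM + 3, HIDDEN)) * 0.1
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((HIDDEN, 1)) * 0.1
b2 = np.zeros(1)

def sdf(latent_code: np.ndarray, point: np.ndarray) -> float:
    """Predict the signed distance of `point` for the shape `latent_code`."""
    x = np.concatenate([latent_code, point])
    h = np.tanh(x @ W1 + b1)             # hidden layer
    return float(np.tanh(h @ W2 + b2))   # output squashed to [-1, 1]

code = rng.standard_normal(LATENT_DIM)
d = sdf(code, np.array([0.1, -0.2, 0.3]))
# Convention: d < 0 inside the shape, d > 0 outside, d == 0 on the surface.
```

With trained weights and codes, the zero level set of `sdf` would be extracted (e.g. via marching cubes) to recover the reconstructed mesh.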

Attention for Robot Touch: Tactile Saliency Prediction for Robust Sim-to-Real Tactile Control

no code implementations · 26 Jul 2023 · Yijiong Lin, Mauro Comi, Alex Church, Dandan Zhang, Nathan F. Lepora

To improve the robustness of tactile robot control in unstructured environments, we propose and study a new concept: "tactile saliency" for robot touch, inspired by the human touch attention mechanism from neuroscience and the visual saliency prediction problem from computer vision.

Pose Estimation · Saliency Prediction

Towards More Sample Efficiency in Reinforcement Learning with Data Augmentation

1 code implementation · 19 Oct 2019 · Yijiong Lin, Jiancong Huang, Matthieu Zimmer, Juan Rojas, Paul Weng

Deep reinforcement learning (DRL) is a promising approach for adaptive robot control, but its application to robotics is currently hindered by high sample requirements.

Data Augmentation · reinforcement-learning +1

Invariant Transform Experience Replay: Data Augmentation for Deep Reinforcement Learning

1 code implementation · 24 Sep 2019 · Yijiong Lin, Jiancong Huang, Matthieu Zimmer, Yisheng Guan, Juan Rojas, Paul Weng

Our work demonstrates that invariant transformations on RL trajectories are a promising methodology to speed up learning in deep RL.

Data Augmentation · OpenAI Gym +2
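The abstract above describes applying invariant transformations to RL trajectories to speed up learning. A minimal sketch of that idea is below: a replay buffer is augmented with transformed copies of stored transitions under a task symmetry. The specific transform (mirroring a planar task about the y-axis) and the transition fields are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Sketch of invariant-transform data augmentation for a replay buffer
# (assumed planar task with mirror symmetry; not the authors' exact method).

def mirror_x(transition):
    """Reflect state, action, and next state about the y-axis.

    If the reward depends only on distances, this reflection leaves the
    reward invariant, so the transformed tuple is also a valid experience.
    """
    s, a, r, s2 = transition
    flip = np.array([-1.0, 1.0])  # negate the x-component only
    return (s * flip, a * flip, r, s2 * flip)

def augment(buffer, transforms):
    """Return the buffer plus one transformed copy per transform."""
    out = list(buffer)
    for t in transforms:
        out.extend(t(tr) for tr in buffer)
    return out

# One stored transition: (state, action, reward, next_state).
buffer = [(np.array([0.5, 0.2]), np.array([0.1, 0.0]),
           -1.0, np.array([0.6, 0.2]))]
augmented = augment(buffer, [mirror_x])
# The augmented buffer holds the original and its mirrored counterpart,
# doubling the experience available to the learner at no extra env cost.
```

The design choice is that transforms are supplied as a list, so several task symmetries (rotations, reflections, translations) can be composed to multiply the effective sample count.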
