no code implementations • 29 Mar 2024 • Mauro Comi, Alessio Tonioni, Max Yang, Jonathan Tremblay, Valts Blukis, Yijiong Lin, Nathan F. Lepora, Laurence Aitchison
Touch and vision go hand in hand, mutually enhancing our ability to understand the world.
no code implementations • 21 Nov 2023 • Mauro Comi, Yijiong Lin, Alex Church, Alessio Tonioni, Laurence Aitchison, Nathan F. Lepora
To address these challenges, we propose TouchSDF, a Deep Learning approach for tactile 3D shape reconstruction that leverages the rich information provided by a vision-based tactile sensor and the expressivity of the implicit neural representation DeepSDF.
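DeepSDF-style implicit representations model a surface as the zero level set of a learned signed distance function conditioned on a per-shape latent code. As a rough illustration of that idea only (the layer sizes, weights, and latent dimension below are placeholder assumptions, not TouchSDF's architecture), a decoder maps a latent code plus a 3D query point to a signed distance:

```python
import numpy as np

# Illustrative sketch of a DeepSDF-style decoder: a small MLP mapping a
# shape latent code z and a 3D query point x to a signed distance value.
# All sizes and (random) weights are placeholders, not the paper's model.

rng = np.random.default_rng(0)

LATENT_DIM = 8   # assumed per-shape latent code size
HIDDEN = 32      # assumed hidden width

W1 = rng.normal(scale=0.1, size=(LATENT_DIM + 3, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(scale=0.1, size=(HIDDEN, 1))
b2 = np.zeros(1)

def sdf(z, x):
    """Predict the signed distance of point x (3,) for shape latent z."""
    h = np.maximum(np.concatenate([z, x]) @ W1 + b1, 0.0)  # ReLU layer
    return float(h @ W2 + b2)

z = rng.normal(size=LATENT_DIM)   # latent code identifying one shape
x = np.array([0.1, -0.2, 0.05])   # 3D query point
s = sdf(z, x)                     # s < 0 inside the surface, s > 0 outside
```

The surface itself is then the set of points where the predicted distance is zero, which is what makes the representation continuous and resolution-free.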
no code implementations • 26 Jul 2023 • Yijiong Lin, Mauro Comi, Alex Church, Dandan Zhang, Nathan F. Lepora
To improve the robustness of tactile robot control in unstructured environments, we propose and study a new concept, "tactile saliency" for robot touch, inspired by the human touch attention mechanism from neuroscience and the visual saliency prediction problem from computer vision.

1 code implementation • 19 Oct 2019 • Yijiong Lin, Jiancong Huang, Matthieu Zimmer, Juan Rojas, Paul Weng
Deep reinforcement learning (DRL) is a promising approach for adaptive robot control, but its application to robotics is currently hindered by high sample requirements.
1 code implementation • 24 Sep 2019 • Yijiong Lin, Jiancong Huang, Matthieu Zimmer, Yisheng Guan, Juan Rojas, Paul Weng
Our work demonstrates that invariant transformations on RL trajectories are a promising methodology to speed up learning in deep RL.
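Applying invariance-preserving transformations to stored trajectories is a form of data augmentation: each recorded transition yields additional, equally valid training samples at no extra interaction cost. A minimal sketch of the idea, assuming a task with a mirror symmetry under which negating state and action leaves dynamics and reward unchanged (the buffer format and transform are illustrative, not the papers' exact method):

```python
import numpy as np

# Hedged sketch: augmenting an RL replay buffer with an invariant
# transformation. We assume a 1D task symmetric under x -> -x, so
# negating state and action produces an equally valid transition.

def reflect(transition):
    """Map (s, a, r, s') to its mirror image under the assumed symmetry."""
    s, a, r, s_next = transition
    return (-s, -a, r, -s_next)

buffer = [
    (np.array([0.5]), np.array([0.1]), 1.0, np.array([0.6])),
    (np.array([-0.2]), np.array([0.3]), 0.0, np.array([0.1])),
]

# Each stored transition contributes one extra training sample,
# doubling the effective replay data without new environment steps.
augmented = buffer + [reflect(t) for t in buffer]
```

Because the transformed transitions obey the same dynamics, the agent learns from them as if they had been experienced, which is the source of the speed-up.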