Search Results for author: Ankur Handa

Found 23 papers, 10 papers with code

Factory: Fast Contact for Robotic Assembly

no code implementations7 May 2022 Yashraj Narang, Kier Storey, Iretiayo Akinola, Miles Macklin, Philipp Reist, Lukasz Wawrzyniak, Yunrong Guo, Adam Moravanszky, Gavriel State, Michelle Lu, Ankur Handa, Dieter Fox

We aim for Factory to open the doors to using simulation for robotic assembly, as well as many other contact-rich applications in robotics.

Transferring Dexterous Manipulation from GPU Simulation to a Remote Real-World TriFinger

1 code implementation22 Aug 2021 Arthur Allshire, Mayank Mittal, Varun Lodaya, Viktor Makoviychuk, Denys Makoviichuk, Felix Widmaier, Manuel Wüthrich, Stefan Bauer, Ankur Handa, Animesh Garg

We present a system for learning a challenging dexterous manipulation task: moving a cube to an arbitrary 6-DoF pose with only three fingers, trained with NVIDIA's IsaacGym simulator.


Perspectives on Sim2Real Transfer for Robotics: A Summary of the R:SS 2020 Workshop

no code implementations7 Dec 2020 Sebastian Höfer, Kostas Bekris, Ankur Handa, Juan Camilo Gamboa, Florian Golemo, Melissa Mozifian, Chris Atkeson, Dieter Fox, Ken Goldberg, John Leonard, C. Karen Liu, Jan Peters, Shuran Song, Peter Welinder, Martha White

This report presents the debates, posters, and discussions of the Sim2Real workshop held in conjunction with the 2020 edition of the "Robotics: Science and Systems" conference.

Information Theoretic Model Predictive Q-Learning

no code implementations31 Dec 2019 Mohak Bhardwaj, Ankur Handa, Dieter Fox, Byron Boots

Model-free Reinforcement Learning (RL) works well when experience can be collected cheaply and model-based RL is effective when system dynamics can be modeled accurately.

Tasks: Decision Making, Q-Learning (+1)

DexPilot: Vision Based Teleoperation of Dexterous Robotic Hand-Arm System

no code implementations7 Oct 2019 Ankur Handa, Karl Van Wyk, Wei Yang, Jacky Liang, Yu-Wei Chao, Qian Wan, Stan Birchfield, Nathan Ratliff, Dieter Fox

Teleoperation offers the possibility of imparting robotic systems with sophisticated reasoning skills, intuition, and creativity to perform tasks.

ContactGrasp: Functional Multi-finger Grasp Synthesis from Contact

4 code implementations7 Apr 2019 Samarth Brahmbhatt, Ankur Handa, James Hays, Dieter Fox

Using a dataset of contact demonstrations from humans grasping diverse household objects, we synthesize functional grasps for three hand models and two functional intents.

GPU-Accelerated Robotic Simulation for Distributed Reinforcement Learning

no code implementations12 Oct 2018 Jacky Liang, Viktor Makoviychuk, Ankur Handa, Nuttapong Chentanez, Miles Macklin, Dieter Fox

Most Deep Reinforcement Learning (Deep RL) algorithms require a prohibitively large number of training samples for learning complex tasks.


Closing the Sim-to-Real Loop: Adapting Simulation Randomization with Real World Experience

no code implementations12 Oct 2018 Yevgen Chebotar, Ankur Handa, Viktor Makoviychuk, Miles Macklin, Jan Issac, Nathan Ratliff, Dieter Fox

In doing so, we are able to change the distribution of simulations to improve the policy transfer by matching the policy behavior in simulation and the real world.
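The idea of adapting the simulation distribution by matching simulated and real behavior can be sketched with a simple cross-entropy-style search over one randomized parameter. This is an illustrative toy, not the paper's actual algorithm: the `rollout` function, the 1-D "friction" parameter, and all ranges are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout(friction):
    # Hypothetical stand-in for running a policy in sim (or reality):
    # returns a scalar trajectory feature that depends on the parameter.
    return 2.0 * friction + 0.1 * rng.normal()

real_obs = rollout(0.7)          # pretend 0.7 is the (unknown) real-world friction

mu, sigma = 0.2, 0.5             # initial randomization distribution over friction
for _ in range(30):
    samples = rng.normal(mu, sigma, size=64).clip(0.01, None)
    costs = np.array([(rollout(s) - real_obs) ** 2 for s in samples])
    elite = samples[np.argsort(costs)[:8]]   # sim params best matching real behavior
    mu, sigma = elite.mean(), elite.std() + 1e-3

print(round(mu, 2))  # mean drifts toward the parameter that reproduces real_obs
```

The key property this illustrates is that the randomization distribution narrows around simulation parameters whose rollouts resemble real-world rollouts, rather than being hand-tuned once up front.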

Domain Randomization and Generative Models for Robotic Grasping

no code implementations17 Oct 2017 Joshua Tobin, Lukas Biewald, Rocky Duan, Marcin Andrychowicz, Ankur Handa, Vikash Kumar, Bob McGrew, Jonas Schneider, Peter Welinder, Wojciech Zaremba, Pieter Abbeel

In this work, we explore a novel data generation pipeline for training a deep neural network to perform grasp planning that applies the idea of domain randomization to object synthesis.
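A domain-randomized data generation pipeline of this kind boils down to sampling scene parameters before each render. The sketch below shows the sampling step only; every parameter name and range is illustrative, not taken from the paper.

```python
import random

random.seed(7)

def randomize_scene():
    """Sample one randomized scene description for synthetic grasp data.
    All parameter names and ranges are illustrative placeholders."""
    return {
        "object_mesh": random.choice(["prim_blob_01", "prim_blob_02", "prim_blob_03"]),
        "object_scale": random.uniform(0.5, 1.5),
        "texture_id": random.randrange(1000),        # random, unrealistic textures
        "light_intensity": random.uniform(0.2, 2.0),
        "camera_jitter_deg": random.uniform(-10.0, 10.0),
    }

# Each sampled description would be handed to a renderer to produce one image.
dataset = [randomize_scene() for _ in range(3)]
for scene in dataset:
    print(scene["object_mesh"], round(scene["object_scale"], 2))
```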

Tasks: Robotic Grasping

SceneNet RGB-D: Can 5M Synthetic Images Beat Generic ImageNet Pre-Training on Indoor Segmentation?

no code implementations ICCV 2017 John McCormac, Ankur Handa, Stefan Leutenegger, Andrew J. Davison

We compare the semantic segmentation performance of network weights produced from pre-training on RGB images from our dataset against generic VGG-16 ImageNet weights.

Tasks: Instance Segmentation, Object Detection (+4)

Self-Supervised Siamese Learning on Stereo Image Pairs for Depth Estimation in Robotic Surgery

no code implementations17 May 2017 Menglong Ye, Edward Johns, Ankur Handa, Lin Zhang, Philip Pratt, Guang-Zhong Yang

Robotic surgery has become a powerful tool for performing minimally invasive procedures, providing advantages in dexterity, precision, and 3D vision over traditional surgery.

Tasks: Depth Estimation

SceneNet RGB-D: 5M Photorealistic Images of Synthetic Indoor Trajectories with Ground Truth

1 code implementation15 Dec 2016 John McCormac, Ankur Handa, Stefan Leutenegger, Andrew J. Davison

We introduce SceneNet RGB-D, expanding the previous work of SceneNet to enable large scale photorealistic rendering of indoor scene trajectories.

Tasks: 3D Reconstruction, Depth Estimation (+6)

SemanticFusion: Dense 3D Semantic Mapping with Convolutional Neural Networks

no code implementations16 Sep 2016 John McCormac, Ankur Handa, Andrew Davison, Stefan Leutenegger

This not only produces a useful semantic 3D map, but we also show on the NYUv2 dataset that fusing multiple predictions leads to an improvement even in the 2D semantic labelling over baseline single frame predictions.
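Fusing multiple per-frame predictions can be sketched as a recursive Bayesian update of a class distribution per surface element: multiply the running belief by each frame's softmax output and renormalize. This is a minimal illustration in the spirit of the approach, not the paper's exact update.

```python
import numpy as np

def fuse(prior, frame_probs):
    """Multiply the running class belief by one frame's prediction, renormalize."""
    post = prior * frame_probs
    return post / post.sum()

classes = ["wall", "chair", "floor"]
belief = np.full(3, 1 / 3)                    # uniform prior for one surfel
frames = [np.array([0.5, 0.3, 0.2]),          # noisy single-frame softmax outputs
          np.array([0.6, 0.3, 0.1]),
          np.array([0.4, 0.5, 0.1])]
for p in frames:
    belief = fuse(belief, p)

print(classes[int(belief.argmax())])  # prints "wall"
```

Even though the last frame votes for "chair", the accumulated evidence keeps "wall" on top, which is the improvement over single-frame labelling the abstract describes.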


gvnn: Neural Network Library for Geometric Computer Vision

1 code implementation25 Jul 2016 Ankur Handa, Michael Bloesch, Viorica Patraucean, Simon Stent, John McCormac, Andrew Davison

We introduce gvnn, a neural network library in Torch aimed towards bridging the gap between classic geometric computer vision and deep learning.
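One representative "geometric layer" of this kind is an SE(3) exponential map, turning a 6-vector twist into a rigid-body transform. The NumPy sketch below shows the math only; gvnn itself provides such operations as differentiable Torch layers.

```python
import numpy as np

def so3_hat(w):
    """Skew-symmetric (hat) matrix of a 3-vector."""
    wx, wy, wz = w
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

def se3_exp(xi):
    """Exponential map from a twist xi = (v, w) to a 4x4 rigid transform."""
    v, w = xi[:3], xi[3:]
    theta = np.linalg.norm(w)
    W = so3_hat(w)
    if theta < 1e-8:
        R, V = np.eye(3), np.eye(3)
    else:
        A = np.sin(theta) / theta
        B = (1 - np.cos(theta)) / theta**2
        C = (1 - A) / theta**2
        R = np.eye(3) + A * W + B * (W @ W)          # Rodrigues' rotation
        V = np.eye(3) + B * W + C * (W @ W)          # left Jacobian for translation
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = V @ v
    return T

# 90-degree rotation about z combined with a small translation along x.
T = se3_exp(np.array([0.1, 0.0, 0.0, 0.0, 0.0, np.pi / 2]))
print(np.round(T, 3))
```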

Tasks: Image Reconstruction, Visual Odometry

Understanding Real World Indoor Scenes With Synthetic Data

no code implementations CVPR 2016 Ankur Handa, Viorica Patraucean, Vijay Badrinarayanan, Simon Stent, Roberto Cipolla

Scene understanding is a prerequisite to many high level tasks for any automated intelligent machine operating in real world environments.

Tasks: Scene Understanding

HDRFusion: HDR SLAM using a low-cost auto-exposure RGB-D sensor

no code implementations4 Apr 2016 Shuda Li, Ankur Handa, Yang Zhang, Andrew Calway

We describe a new method for comparing frame appearance in a frame-to-model 3-D mapping and tracking system using a low dynamic range (LDR) RGB-D camera, which is robust to brightness changes caused by auto exposure.
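One standard way to compare patches invariantly to gain and offset changes is zero-mean normalized cross-correlation (ZNCC), shown below as an illustration of exposure-robust frame comparison; it is not necessarily the exact measure HDRFusion uses.

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalized cross-correlation between two image patches.
    Invariant to affine brightness changes (gain and offset)."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

rng = np.random.default_rng(0)
patch = rng.uniform(0.0, 1.0, (8, 8))
brighter = 1.8 * patch + 0.3          # simulated auto-exposure gain and offset

print(round(zncc(patch, brighter), 3))  # → 1.0 despite the brightness change
```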


SceneNet: Understanding Real World Indoor Scenes With Synthetic Data

1 code implementation22 Nov 2015 Ankur Handa, Viorica Patraucean, Vijay Badrinarayanan, Simon Stent, Roberto Cipolla

Scene understanding is a prerequisite to many high level tasks for any automated intelligent machine operating in real world environments.

Tasks: Frame, Scene Understanding

Spatio-temporal video autoencoder with differentiable memory

1 code implementation19 Nov 2015 Viorica Patraucean, Ankur Handa, Roberto Cipolla

At each time step, the system receives as input a video frame, predicts the optical flow based on the current observation and the LSTM memory state as a dense transformation map, and applies it to the current frame to generate the next frame.
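The "apply a dense transformation map to the current frame" step can be sketched as flow-based warping. The paper's model uses a differentiable (bilinear) sampler inside the network; the stand-in below uses nearest-neighbour sampling just to show the mechanics.

```python
import numpy as np

def warp(frame, flow):
    """Predict the next frame by warping the current one with a dense flow
    field; flow[..., 0] is horizontal motion, flow[..., 1] vertical.
    Nearest-neighbour stand-in for a differentiable bilinear sampler."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip((ys - flow[..., 1]).round().astype(int), 0, h - 1)
    src_x = np.clip((xs - flow[..., 0]).round().astype(int), 0, w - 1)
    return frame[src_y, src_x]

frame = np.arange(16, dtype=float).reshape(4, 4)
flow = np.zeros((4, 4, 2))
flow[..., 0] = 1.0                    # every pixel moves one step to the right

pred_next = warp(frame, flow)
print(pred_next[0])  # → [0. 0. 1. 2.]
```

In the full model the flow comes from the LSTM's prediction rather than being fixed, and the warp is differentiable so that reconstruction error on the next frame trains the whole system.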

Tasks: Frame, Motion Estimation (+2)
