Search Results for author: Ajay Kumar Tanwani

Found 8 papers, 4 papers with code

RepsNet: Combining Vision with Language for Automated Medical Reports

no code implementations • 27 Sep 2022 • Ajay Kumar Tanwani, Joelle Barral, Daniel Freedman

We formulate the problem in a visual question answering setting to handle both categorical and descriptive natural language answers.
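A minimal sketch (not the RepsNet architecture, and with a hypothetical label set and vocabulary) of how a single VQA model can return both answer types: categorical answers by classifying over a fixed answer set, descriptive answers by greedy token-by-token decoding.

```python
import numpy as np

# Hypothetical answer set and vocabulary for illustration only.
ANSWERS = ["normal", "abnormal", "unclear"]
VOCAB = ["<eos>", "mild", "opacity", "in", "left", "lung"]

def categorical_answer(logits):
    """Pick the most likely label from classification logits."""
    return ANSWERS[int(np.argmax(logits))]

def descriptive_answer(step_logits):
    """Greedy decode: take the argmax token per step, stop at <eos>."""
    tokens = []
    for logits in step_logits:
        tok = VOCAB[int(np.argmax(logits))]
        if tok == "<eos>":
            break
        tokens.append(tok)
    return " ".join(tokens)
```

In practice both heads would share the same vision-language encoder; only the output head differs by question type.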

Tasks: Contrastive Learning, Descriptive, +3

VisuoSpatial Foresight for Physical Sequential Fabric Manipulation

no code implementations • 19 Feb 2021 • Ryan Hoque, Daniel Seita, Ashwin Balakrishna, Aditya Ganapathi, Ajay Kumar Tanwani, Nawid Jamali, Katsu Yamane, Soshi Iba, Ken Goldberg

We build upon the Visual Foresight framework to learn fabric dynamics that can be efficiently reused to accomplish different sequential fabric manipulation tasks with a single goal-conditioned policy.
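The Visual Foresight idea of planning through a learned visual dynamics model can be sketched as a random-shooting planner: sample candidate action sequences, roll each through the model, and keep the one whose predicted outcome is closest to the goal image. The `dynamics` function below is a hypothetical one-step predictor standing in for the learned fabric model.

```python
import numpy as np

def visual_foresight_plan(dynamics, state, goal,
                          n_samples=64, horizon=5, action_dim=4, rng=None):
    """Random-shooting planner over a learned visual dynamics model.

    `dynamics(state, action)` is a hypothetical one-step predictor that
    returns the next state (e.g. image features). The planner returns the
    sampled action sequence whose predicted final state is closest to
    `goal`, together with that distance.
    """
    rng = rng or np.random.default_rng(0)
    best_cost, best_seq = np.inf, None
    for _ in range(n_samples):
        seq = rng.uniform(-1.0, 1.0, size=(horizon, action_dim))
        s = state
        for a in seq:
            s = dynamics(s, a)  # roll the candidate sequence forward
        cost = np.linalg.norm(s - goal)  # distance to the goal image/features
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq, best_cost
```

Because the goal enters only through the cost, the same learned dynamics model can be reused for different sequential tasks by swapping the goal image, which is the sense in which the policy is goal-conditioned.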

DIRL: Domain-Invariant Representation Learning for Sim-to-Real Transfer

no code implementations • 15 Nov 2020 • Ajay Kumar Tanwani

Generating large-scale synthetic data in simulation is a feasible alternative to collecting and labelling real data for training vision-based deep learning models, although modelling inaccuracies mean such models do not generalize directly to the physical world.

Tasks: Object Recognition, Representation Learning

Motion2Vec: Semi-Supervised Representation Learning from Surgical Videos

no code implementations • 31 May 2020 • Ajay Kumar Tanwani, Pierre Sermanet, Andy Yan, Raghav Anand, Mariano Phielipp, Ken Goldberg

We demonstrate the use of this representation to imitate surgical suturing motions from publicly available videos of the JIGSAWS dataset.

Tasks: Action Segmentation, Metric Learning, +1

Deep Imitation Learning of Sequential Fabric Smoothing From an Algorithmic Supervisor

1 code implementation • 23 Sep 2019 • Daniel Seita, Aditya Ganapathi, Ryan Hoque, Minho Hwang, Edward Cen, Ajay Kumar Tanwani, Ashwin Balakrishna, Brijen Thananjeyan, Jeffrey Ichnowski, Nawid Jamali, Katsu Yamane, Soshi Iba, John Canny, Ken Goldberg

In 180 physical experiments with the da Vinci Research Kit (dVRK) surgical robot, RGBD policies trained in simulation attain coverage of 83% to 95% depending on difficulty tier, suggesting that effective fabric smoothing policies can be learned from an algorithmic supervisor and that depth sensing is a valuable addition to color alone.

Tasks: Imitation Learning

Dynamic Regret Convergence Analysis and an Adaptive Regularization Algorithm for On-Policy Robot Imitation Learning

1 code implementation • 6 Nov 2018 • Jonathan N. Lee, Michael Laskey, Ajay Kumar Tanwani, Anil Aswani, Ken Goldberg

In this article, we reframe this result using dynamic regret theory from the field of online optimization and show that dynamic regret can be applied to any on-policy algorithm to analyze its convergence and optimality.
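For context, the standard notion of dynamic regret from online optimization compares the learner's cumulative loss against a comparator that may change at every round, rather than a single fixed comparator:

```latex
R_D(T) \;=\; \sum_{t=1}^{T} f_t(\theta_t) \;-\; \sum_{t=1}^{T} \min_{\theta \in \Theta} f_t(\theta)
```

Here \(f_t\) is the loss at round \(t\), \(\theta_t\) is the policy parameter played by the on-policy learner, and \(\Theta\) is the policy class; sublinear \(R_D(T)\) implies the sequence of policies converges toward the per-round optima on average.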

Tasks: Imitation Learning
