Search Results for author: Kuan Fang

Found 14 papers, 2 papers with code

Planning to Practice: Efficient Online Fine-Tuning by Composing Goals in Latent Space

no code implementations 17 May 2022 Kuan Fang, Patrick Yin, Ashvin Nair, Sergey Levine

Our experimental results show that PTP can generate feasible sequences of subgoals that enable the policy to efficiently solve the target tasks.

reinforcement-learning

Discovering Generalizable Skills via Automated Generation of Diverse Tasks

no code implementations26 Jun 2021 Kuan Fang, Yuke Zhu, Silvio Savarese, Li Fei-Fei

To encourage generalizable skills to emerge, our method trains each skill to specialize in the paired task and maximizes the diversity of the generated tasks.

Hierarchical Reinforcement Learning · reinforcement-learning

Synergies Between Affordance and Geometry: 6-DoF Grasp Detection via Implicit Representations

1 code implementation 4 Apr 2021 Zhenyu Jiang, Yifeng Zhu, Maxwell Svetlik, Kuan Fang, Yuke Zhu

The experimental results in simulation and on the real robot demonstrate that the use of implicit neural representations and the joint learning of grasp affordance and 3D reconstruction lead to state-of-the-art grasping results.

3D Reconstruction · Multi-Task Learning

Adaptive Procedural Task Generation for Hard-Exploration Problems

no code implementations ICLR 2021 Kuan Fang, Yuke Zhu, Silvio Savarese, Li Fei-Fei

To enable curriculum learning in the absence of a direct indicator of learning progress, we propose to train the task generator by balancing the agent's performance in the generated tasks and the similarity to the target tasks.

SERank: Optimize Sequencewise Learning to Rank Using Squeeze-and-Excitation Network

1 code implementation 7 Jun 2020 RuiXing Wang, Kuan Fang, RiKang Zhou, Zhan Shen, LiWen Fan

Recently, a few methods have been proposed that focus on mining information across the list of ranking candidates for further improvement, such as learning a multivariate scoring function or learning contextual embeddings.

Learning-To-Rank · Question Answering

Dynamics Learning with Cascaded Variational Inference for Multi-Step Manipulation

no code implementations 29 Oct 2019 Kuan Fang, Yuke Zhu, Animesh Garg, Silvio Savarese, Li Fei-Fei

The fundamental challenge of planning for multi-step manipulation is to find effective and plausible action sequences that lead to the task goal.

Variational Inference

KETO: Learning Keypoint Representations for Tool Manipulation

no code implementations 26 Oct 2019 Zengyi Qin, Kuan Fang, Yuke Zhu, Li Fei-Fei, Silvio Savarese

For this purpose, we present KETO, a framework for learning keypoint representations for tool-based manipulation.

Robotics

Learning Task-Oriented Grasping for Tool Manipulation from Simulated Self-Supervision

no code implementations 25 Jun 2018 Kuan Fang, Yuke Zhu, Animesh Garg, Andrey Kurenkov, Viraj Mehta, Li Fei-Fei, Silvio Savarese

We perform both simulated and real-world experiments on two tool-based manipulation tasks: sweeping and hammering.

Demo2Vec: Reasoning Object Affordances From Online Videos

no code implementations CVPR 2018 Kuan Fang, Te-Lin Wu, Daniel Yang, Silvio Savarese, Joseph J. Lim

Watching expert demonstrations is an important way for humans and robots to reason about affordances of unseen objects.

Recurrent Autoregressive Networks for Online Multi-Object Tracking

no code implementations 7 Nov 2017 Kuan Fang, Yu Xiang, Xiaocheng Li, Silvio Savarese

The external memory explicitly stores previous inputs of each trajectory in a time window, while the internal memory learns to summarize long-term tracking history and associate detections by processing the external memory.

Multi-Object Tracking · Online Multi-Object Tracking

DeLay: Robust Spatial Layout Estimation for Cluttered Indoor Scenes

no code implementations CVPR 2016 Saumitro Dasgupta, Kuan Fang, Kevin Chen, Silvio Savarese

We consider the problem of estimating the spatial layout of an indoor scene from a monocular RGB image, modeled as the projection of a 3D cuboid.
