Search Results for author: Daniel Jing

Found 1 paper, 1 paper with code

What Can I Do Here? Learning New Skills by Imagining Visual Affordances

2 code implementations • 1 Jun 2021 • Alexander Khazatsky, Ashvin Nair, Daniel Jing, Sergey Levine

In effect, prior data is used to learn what kinds of outcomes may be possible, such that when the robot encounters an unfamiliar setting, it can sample potential outcomes from its model, attempt to reach them, and thereby update both its skills and its outcome model.
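The loop described above — sample an imagined outcome, attempt to reach it, then update both the skill and the outcome model — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation; all class and function names (`OutcomeModel`, `attempt`, `practice`) are hypothetical stand-ins.

```python
import random

class OutcomeModel:
    """Toy stand-in for a generative model over possible outcomes."""

    def __init__(self, outcomes):
        self.outcomes = list(outcomes)

    def sample(self):
        # Imagine one outcome that may be achievable here.
        return random.choice(self.outcomes)

    def update(self, outcome):
        # Incorporate a newly observed outcome into the model.
        if outcome not in self.outcomes:
            self.outcomes.append(outcome)


def attempt(goal):
    """Toy stand-in for executing a policy toward a sampled goal.

    Returns the outcome actually reached, which is occasionally
    a novel variant rather than the intended goal.
    """
    return goal if random.random() < 0.8 else goal + "_variant"


def practice(model, steps=10):
    """Practice loop: imagine an outcome, try to reach it, update."""
    for _ in range(steps):
        goal = model.sample()      # sample a potential outcome
        reached = attempt(goal)    # attempt to reach it
        model.update(reached)      # refine the outcome model
    return model


model = practice(OutcomeModel(["drawer_open", "object_grasped"]))
```

The key design point the abstract highlights is that the same interaction data drives two updates at once: the policy improves from the attempt, and the outcome model expands to cover outcomes actually reached in the unfamiliar setting.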

Zero-shot Generalization
