no code implementations • 7 Feb 2024 • David Venuto, Sami Nur Islam, Martin Klissarov, Doina Precup, Sherry Yang, Ankit Anand
Pre-trained Vision-Language Models (VLMs) are able to understand visual concepts, describe and decompose complex tasks into sub-tasks, and provide feedback on task completion.
no code implementations • 23 Nov 2022 • David Venuto, Sherry Yang, Pieter Abbeel, Doina Precup, Igor Mordatch, Ofir Nachum
Using massive datasets to train large-scale models has emerged as a dominant approach for broad generalization in natural language and vision applications.
no code implementations • ICLR 2022 • David Venuto, Elaine Lau, Doina Precup, Ofir Nachum
Reasoning about the future -- understanding how present decisions affect future outcomes -- is one of the central challenges for reinforcement learning (RL), especially in highly stochastic or partially observable environments.
no code implementations • 20 Feb 2020 • David Venuto, Jhelum Chakravorty, Leonard Boussioux, Junhao Wang, Gavin McCracken, Doina Precup
The need to explicitly engineer reward functions for each environment has been a major hindrance to reinforcement learning methods.
1 code implementation • 24 Sep 2019 • David Venuto, Leonard Boussioux, Junhao Wang, Rola Dali, Jhelum Chakravorty, Yoshua Bengio, Doina Precup
We define avoidance learning as the process of optimizing the agent's reward while avoiding dangerous behaviors given by a demonstrator.
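The objective described above can be illustrated with a minimal sketch: an agent's environment reward is penalized whenever it visits a state the demonstrator has flagged as dangerous. This is a hypothetical toy illustration of the idea, not the paper's actual method (which learns the avoidance signal rather than using a fixed state set); `shaped_reward`, `dangerous_states`, and the penalty value are all assumptions made here for illustration.

```python
def shaped_reward(state, env_reward, dangerous_states, penalty=10.0):
    """Return the environment reward, minus a fixed penalty when the
    state belongs to the demonstrator-flagged dangerous set.

    (Hypothetical sketch: the penalty constant and the explicit state
    set stand in for the learned avoidance signal.)
    """
    return env_reward - (penalty if state in dangerous_states else 0.0)


# Toy grid-world example: two cells flagged as dangerous by a demonstrator.
dangerous = {(2, 3), (4, 1)}
r_unsafe = shaped_reward((2, 3), 1.0, dangerous)  # penalized
r_safe = shaped_reward((0, 0), 1.0, dangerous)    # unchanged
```

An agent maximizing `shaped_reward` instead of the raw environment reward is thereby pushed away from the flagged behaviors while still pursuing its original task.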
3 code implementations • 30 Jan 2014 • David Venuto, Toby Dylan Hocking, Lakjaree Sphanurattana, Masashi Sugiyama
In ranking problems, the goal is to learn a ranking function from labeled pairs of input points.
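The pairwise setup above can be sketched as follows: given labeled pairs where one point should rank above the other, fit a linear scoring function by minimizing a pairwise logistic loss with gradient descent. This is a generic illustration of learning-to-rank from pairs, not the comparison-machine method of the paper itself; the function name, loss choice, and training loop are assumptions for the sketch.

```python
import numpy as np

def train_pairwise_ranker(pairs, lr=0.1, epochs=200):
    """Fit a linear scoring vector w from labeled pairs.

    Each pair (x_pos, x_neg) means x_pos should score above x_neg.
    Minimizes the pairwise logistic loss log(1 + exp(-w . (x_pos - x_neg)))
    by plain batch gradient descent.  (Hypothetical sketch.)
    """
    dim = len(pairs[0][0])
    w = np.zeros(dim)
    for _ in range(epochs):
        grad = np.zeros(dim)
        for x_pos, x_neg in pairs:
            d = np.asarray(x_pos) - np.asarray(x_neg)
            # gradient of log(1 + exp(-w.d)) with respect to w
            grad += -d / (1.0 + np.exp(w @ d))
        w -= lr * grad / len(pairs)
    return w


# Toy data: the first feature determines the true ranking.
rng = np.random.default_rng(0)
items = rng.normal(size=(20, 3))
pairs = [(items[i], items[j])
         for i in range(20) for j in range(20)
         if items[i][0] > items[j][0] + 0.5]

w = train_pairwise_ranker(pairs)
scores = items @ w  # learned ranking scores for each item
```

Sorting items by `scores` then recovers the ordering implied by the labeled pairs; more elaborate methods replace the linear scorer with kernels or richer models.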