no code implementations • 2 Feb 2024 • Lilian W. Bialokozowicz, Hoang M. Le, Tristan Sylvain, Peter A. I. Forsyth, Vineel Nagisetty, Greg Mori
This paper introduces the Orthogonal Polynomials Quadrature Algorithm for Survival Analysis (OPSurv), a new method providing time-continuous functional outputs for both single and competing risks scenarios in survival analysis.
no code implementations • CVPR 2023 • Hoang M. Le, Brian Price, Scott Cohen, Michael S. Brown
Inspired by neural implicit representations for 2D images, we propose a method that optimizes a lightweight multi-layer perceptron (MLP) model during the gamut reduction step to predict the clipped values.
no code implementations • 20 Jun 2022 • Cameron Voloshin, Hoang M. Le, Swarat Chaudhuri, Yisong Yue
We study the problem of policy optimization (PO) with linear temporal logic (LTL) constraints.
3 code implementations • 15 Nov 2019 • Cameron Voloshin, Hoang M. Le, Nan Jiang, Yisong Yue
We offer an experimental benchmark and empirical study for off-policy policy evaluation (OPE) in reinforcement learning, which is a key problem in many safety-critical applications.
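A standard baseline that OPE benchmarks such as this one compare against is ordinary importance sampling, which reweights returns collected under a behavior policy to estimate the value of a target policy. The sketch below illustrates that estimator in a minimal form; the function name and the `pi_e`/`pi_b` callable interfaces are assumptions for illustration, not an API from the paper.

```python
import numpy as np

def per_trajectory_is_estimate(trajectories, pi_e, pi_b, gamma=0.99):
    """Ordinary importance-sampling (IS) estimate of a target policy's value.

    trajectories: list of [(state, action, reward), ...] tuples collected
    under the behavior policy pi_b.
    pi_e, pi_b: callables returning the probability of taking `action` in
    `state` under the evaluation and behavior policies, respectively.
    """
    estimates = []
    for traj in trajectories:
        ratio, ret = 1.0, 0.0
        for t, (s, a, r) in enumerate(traj):
            ratio *= pi_e(s, a) / pi_b(s, a)  # cumulative importance weight
            ret += (gamma ** t) * r           # discounted return
        estimates.append(ratio * ret)
    return float(np.mean(estimates))
```

When the evaluation and behavior policies coincide, every importance weight is 1 and the estimate reduces to the empirical mean return, which is a quick sanity check for implementations like this.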
no code implementations • NeurIPS 2019 • Abhinav Verma, Hoang M. Le, Yisong Yue, Swarat Chaudhuri
First, we view our learning task as optimization in policy space, subject to the constraint that the desired policy has a programmatic representation. We solve this optimization problem using a form of mirror descent that takes a gradient step in the unconstrained policy space and then projects back onto the constrained space.
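The update-then-project loop described above can be sketched as follows. This is a minimal illustration of the general pattern, not the paper's implementation: `grad_fn` stands in for a policy-gradient estimate, and `project_fn` stands in for the projection onto the programmatic policy class (which in practice is a nontrivial imitation/distillation step); both names are assumptions.

```python
import numpy as np

def update_then_project(theta, grad_fn, project_fn, steps=10, lr=0.1):
    """Gradient step in unconstrained policy space, then projection.

    theta: parameters of an unconstrained policy.
    grad_fn(theta): ascent direction (e.g. a policy-gradient estimate).
    project_fn(theta): nearest point representable by the constrained
    (here, programmatic) policy class.
    """
    for _ in range(steps):
        theta = theta + lr * grad_fn(theta)  # unconstrained gradient step
        theta = project_fn(theta)            # project back onto the class
    return theta
```

With a toy quadratic objective and a box projection, the iterates contract toward the optimum while always remaining inside the constrained set, which is the behavior projected mirror-descent schemes rely on.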
2 code implementations • 20 Mar 2019 • Hoang M. Le, Cameron Voloshin, Yisong Yue
When learning policies for real-world domains, two important questions arise: (i) how to efficiently use pre-collected off-policy, non-optimal behavior data; and (ii) how to mediate among different competing objectives and constraints.
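One common way to mediate between a primary objective and constraints is a Lagrangian game: a policy player best-responds to the current multiplier, while the multiplier ascends on the constraint violation. The toy sketch below shows that pattern over a finite candidate set; all names, the candidate-set setup, and the dual-ascent schedule are illustrative assumptions, not the paper's algorithm.

```python
def constrained_selection_via_lagrangian(policies, cost, constraint, limit,
                                         lr=0.1, rounds=200):
    """Toy Lagrangian game for policy learning under a constraint.

    policies: finite set of candidate policies.
    cost(p), constraint(p): scalar evaluations of a candidate.
    limit: required upper bound on the constraint value.
    """
    lam = 0.0
    best = policies[0]
    for _ in range(rounds):
        # policy player: best response to the current multiplier
        best = min(policies, key=lambda q: cost(q) + lam * constraint(q))
        # multiplier player: dual ascent on the constraint violation
        lam = max(0.0, lam + lr * (constraint(best) - limit))
    return best, lam
```

In practice the best-response step is itself a (batch, off-policy) policy-optimization problem rather than a minimum over an enumerable set, but the min-max structure is the same.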
no code implementations • 18 Mar 2019 • Andrew J. Taylor, Victor D. Dorobantu, Meera Krishnamoorthy, Hoang M. Le, Yisong Yue, Aaron D. Ames
The goal of this paper is to understand the impact of learning on control synthesis from a Lyapunov function perspective.
no code implementations • 4 Mar 2019 • Andrew J. Taylor, Victor D. Dorobantu, Hoang M. Le, Yisong Yue, Aaron D. Ames
Many modern nonlinear control methods aim to endow systems with guaranteed properties, such as stability or safety, and have been successfully applied to the domain of robotics.
no code implementations • ICML 2018 • Hoang M. Le, Nan Jiang, Alekh Agarwal, Miroslav Dudík, Yisong Yue, Hal Daumé III
We study how to effectively leverage expert feedback to learn sequential decision-making policies.
no code implementations • ICML 2017 • Hoang M. Le, Yisong Yue, Peter Carr, Patrick Lucey
We study the problem of imitation learning from demonstrations of multiple coordinating agents.
2 code implementations • 3 Jun 2016 • Hoang M. Le, Andrew Kang, Yisong Yue, Peter Carr
We study the problem of smooth imitation learning for online sequence prediction, where the goal is to train a policy that can smoothly imitate demonstrated behavior in a dynamic and continuous environment in response to online, sequential context input.
no code implementations • CVPR 2016 • Jianhui Chen, Hoang M. Le, Peter Carr, Yisong Yue, James J. Little
We study the problem of online prediction for real-time camera planning, where the goal is to predict smooth trajectories that correctly track and frame objects of interest (e.g., players in a basketball game).