1 code implementation • 26 Oct 2023 • Antonio Valerio Miceli-Barone, Alex Lascarides, Craig Innes
Simulation is an invaluable tool for developing and evaluating controllers for self-driving cars.
1 code implementation • 20 Sep 2022 • Craig Innes, Subramanian Ramamoorthy
Testing black-box perceptual-control systems in simulation faces two difficulties.
1 code implementation • 21 May 2022 • Anthony L. Corso, Sydney M. Katz, Craig Innes, Xin Du, Subramanian Ramamoorthy, Mykel J. Kochenderfer
We formulate a risk function to quantify the effect of a given perceptual error on overall safety, and show how we can use it to design safer perception systems by including a risk-dependent term in the loss function and generating training data in risk-sensitive regions.
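The idea of a risk-dependent term in the loss can be sketched as follows. This is a minimal illustration, not the paper's formulation: the specific risk function, its `sensitivity` parameter, and the function names are assumptions made for the example.

```python
import numpy as np

def risk_weight(perceptual_error, sensitivity=2.0):
    # Illustrative risk function: larger perceptual errors are assumed
    # to have a larger effect on downstream safety, so they get a
    # larger weight. The real risk function would be derived from the
    # controller and environment, not from the error magnitude alone.
    return 1.0 + sensitivity * np.abs(perceptual_error)

def risk_weighted_mse(pred, target):
    err = pred - target
    # Standard squared error, scaled per-sample by the risk weight,
    # so training effort concentrates on risk-sensitive regions.
    return np.mean(risk_weight(err) * err ** 2)

pred = np.array([1.0, 2.5, 0.0])
target = np.array([1.0, 2.0, 1.0])
loss = risk_weighted_mse(pred, target)
```

Because the weight grows with the error, the risk-weighted loss upper-bounds the plain mean squared error whenever any error is nonzero.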
no code implementations • 25 Feb 2022 • Manu Lahariya, Craig Innes, Chris Develder, Subramanian Ramamoorthy
We simulate, using FEM, the task of using a DEA to pull a coin along a surface with frictional contact, and evaluate the physics-informed model for simulation, control, and inference.
1 code implementation • 12 Feb 2022 • Luca Viano, Yu-Ting Huang, Parameswaran Kamalaruban, Craig Innes, Subramanian Ramamoorthy, Adrian Weller
Imitation learning (IL) is a popular paradigm for training policies in robotic systems when specifying the reward function is difficult.
no code implementations • 4 Feb 2020 • Michael Burke, Katie Lu, Daniel Angelov, Artūras Straižys, Craig Innes, Kartic Subr, Subramanian Ramamoorthy
This work considers the inverse problem: the goal of the task is unknown, and a reward function must be inferred from exploratory demonstrations provided by a demonstrator, for use in a downstream informative path-planning policy.
no code implementations • 3 Feb 2020 • Craig Innes, Subramanian Ramamoorthy
We also implement our system on a PR-2 robot to show how a demonstrator can start with an initial (sub-optimal) demonstration, then interactively improve task success by including additional specifications enforced with our differentiable LTL loss.
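One common way to make a temporal-logic objective differentiable is to use a quantitative (robustness) semantics and smooth the min/max operators. The sketch below illustrates that general idea for an "always φ" specification; the soft-min construction, temperature parameter, and function names are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def softmin(x, temp=10.0):
    # Smooth, differentiable approximation of min via log-sum-exp.
    # As temp grows, this approaches the hard minimum.
    return -np.log(np.sum(np.exp(-temp * x))) / temp

def always(robustness):
    # Robustness of "always phi" over a trajectory is the worst-case
    # (minimum) per-step satisfaction; the soft version lets gradients
    # flow to every timestep, not just the argmin.
    return softmin(robustness)

def ltl_loss(robustness):
    # Penalise violation (negative robustness); once the trajectory
    # satisfies the specification with margin, the loss is zero.
    return np.maximum(0.0, -always(robustness))

satisfied = np.array([0.5, 0.2, 0.8])   # phi holds at every step
violated = np.array([-0.5, 0.2])        # phi fails at the first step
```

Minimising such a loss alongside an imitation objective is one way additional specifications can reshape a learned trajectory.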
no code implementations • 27 Feb 2019 • Craig Innes, Alex Lascarides
Methods for learning and planning in sequential decision problems often assume the learner is aware of all possible states and actions in advance.
no code implementations • 10 Jan 2018 • Craig Innes, Alex Lascarides, Stefano V. Albrecht, Subramanian Ramamoorthy, Benjamin Rosman
Methods for learning optimal policies in autonomous agents often assume that the way the domain is conceptualised (its possible states and actions and their causal structure) is known in advance and does not change during learning.