Search Results for author: Craig Innes

Found 9 papers, 4 papers with code

Risk-Driven Design of Perception Systems

1 code implementation • 21 May 2022 • Anthony L. Corso, Sydney M. Katz, Craig Innes, Xin Du, Subramanian Ramamoorthy, Mykel J. Kochenderfer

We formulate a risk function to quantify the effect of a given perceptual error on overall safety, and show how we can use it to design safer perception systems by including a risk-dependent term in the loss function and generating training data in risk-sensitive regions.
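The risk-dependent loss term described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the risk values are assumed to come from some externally supplied risk function, and `base_weight` is a hypothetical mixing parameter.

```python
import numpy as np

def risk_weighted_loss(errors, risks, base_weight=1.0):
    """Per-sample training loss in which each perceptual error is
    scaled by the safety risk attributed to that sample.

    errors : squared perception errors, one per sample.
    risks  : risk-function values per sample (hypothetical stand-in
             for a learned risk function as in the paper).
    """
    errors = np.asarray(errors, dtype=float)
    risks = np.asarray(risks, dtype=float)
    # Standard loss plus a term that up-weights high-risk samples.
    return float(np.mean(errors * (base_weight + risks)))
```

With two equal errors, the high-risk sample contributes twice as much to the mean, which is the intended effect of the risk-dependent term.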

Learning physics-informed simulation models for soft robotic manipulation: A case study with dielectric elastomer actuators

no code implementations • 25 Feb 2022 • Manu Lahariya, Craig Innes, Chris Develder, Subramanian Ramamoorthy

We simulate the task of using DEA to pull a coin along a surface with frictional contact, using FEM, and evaluate the physics-informed model for simulation, control, and inference.
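A physics-informed model of the kind described above typically trains against a combined objective: a data-fitting term plus a penalty on how far predictions violate the governing physics. The sketch below assumes a precomputed physics residual and a hypothetical weighting `lam`; the paper's exact formulation may differ.

```python
import numpy as np

def physics_informed_loss(pred, target, residual, lam=0.1):
    """Combined objective for a physics-informed simulation model:
    mean-squared data error plus a weighted penalty on the physics
    residual (e.g. violation of the FEM-derived dynamics).
    """
    data_loss = np.mean((np.asarray(pred, dtype=float)
                         - np.asarray(target, dtype=float)) ** 2)
    physics_loss = np.mean(np.asarray(residual, dtype=float) ** 2)
    return float(data_loss + lam * physics_loss)
```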

Robust Learning from Observation with Model Misspecification

1 code implementation • 12 Feb 2022 • Luca Viano, Yu-Ting Huang, Parameswaran Kamalaruban, Craig Innes, Subramanian Ramamoorthy, Adrian Weller

Imitation learning (IL) is a popular paradigm for training policies in robotic systems when specifying the reward function is difficult.

Tasks: Continuous Control • Imitation Learning • +1

Learning rewards for robotic ultrasound scanning using probabilistic temporal ranking

no code implementations • 4 Feb 2020 • Michael Burke, Katie Lu, Daniel Angelov, Artūras Straižys, Craig Innes, Kartic Subr, Subramanian Ramamoorthy

This work considers the inverse problem, where the goal of the task is unknown, and a reward function needs to be inferred from exploratory example demonstrations provided by a demonstrator, for use in a downstream informative path-planning policy.
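The core assumption behind temporal ranking is that a frame later in an exploratory demonstration should score at least as highly under the inferred reward as an earlier one. A minimal sketch of a Bradley-Terry-style pairwise objective built on that assumption is below; the paper's probabilistic model (e.g. its choice of reward function class) is not reproduced here.

```python
import math

def temporal_ranking_nll(rewards):
    """Negative log-likelihood of a Bradley-Terry-style ranking model
    in which later frames are preferred over earlier ones within a
    demonstration. `rewards` holds scalar reward estimates in
    temporal order.
    """
    nll = 0.0
    n = len(rewards)
    for i in range(n):
        for j in range(i + 1, n):
            # P(later frame j preferred over earlier frame i)
            p = 1.0 / (1.0 + math.exp(rewards[i] - rewards[j]))
            nll -= math.log(p)
    return nll
```

Minimising this objective pushes the reward estimates to increase along the demonstration, which is what makes exploratory demonstrations usable for reward inference.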

Elaborating on Learned Demonstrations with Temporal Logic Specifications

no code implementations • 3 Feb 2020 • Craig Innes, Subramanian Ramamoorthy

We also implement our system on a PR-2 robot to show how a demonstrator can start with an initial (sub-optimal) demonstration, then interactively improve task success by including additional specifications enforced with our differentiable LTL loss.
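A differentiable LTL loss of the kind mentioned above replaces the hard min/max semantics of temporal operators with smooth approximations so gradients flow to every timestep. The sketch below uses a log-sum-exp soft minimum for the "always" operator; the paper's exact smoothing and operator set may differ, and `temp` is a hypothetical temperature parameter.

```python
import numpy as np

def soft_always(robustness, temp=10.0):
    """Smooth semantics for the LTL 'always' operator over a
    trajectory: a log-sum-exp approximation of the minimum per-step
    robustness value, which is differentiable everywhere.
    """
    r = np.asarray(robustness, dtype=float)
    # Soft minimum: -(1/t) * log(sum(exp(-t * r)))
    return float(-np.log(np.sum(np.exp(-temp * r))) / temp)

def ltl_loss(robustness):
    # Positive loss only when the 'always' specification is violated
    # (i.e. the smoothed minimum robustness is negative).
    return max(0.0, -soft_always(robustness))
```

Because the soft minimum lower-bounds the true minimum, driving this loss to zero encourages the trajectory to satisfy the specification at every step.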

Learning Factored Markov Decision Processes with Unawareness

no code implementations • 27 Feb 2019 • Craig Innes, Alex Lascarides

Methods for learning and planning in sequential decision problems often assume the learner is aware of all possible states and actions in advance.

Reasoning about Unforeseen Possibilities During Policy Learning

no code implementations • 10 Jan 2018 • Craig Innes, Alex Lascarides, Stefano V. Albrecht, Subramanian Ramamoorthy, Benjamin Rosman

Methods for learning optimal policies in autonomous agents often assume that the way the domain is conceptualised (its possible states and actions and their causal structure) is known in advance and does not change during learning.
