Search Results for author: Subramanian Ramamoorthy

Found 57 papers, 9 papers with code

Comparison of Pedestrian Prediction Models from Trajectory and Appearance Data for Autonomous Driving

no code implementations · 25 May 2023 · Anthony Knittel, Morris Antonello, John Redford, Subramanian Ramamoorthy

Typical predictors use the trajectory history to predict future motion; however, in cases of motion initiation, the motion may only become clearly visible in the trajectory after a delay, which can mean the pedestrian has already entered the road area before an accurate prediction can be made.

Autonomous Driving · Trajectory Prediction

Interactive Acquisition of Fine-grained Visual Concepts by Exploiting Semantics of Generic Characterizations in Discourse

1 code implementation · 5 May 2023 · Jonghyuk Park, Alex Lascarides, Subramanian Ramamoorthy

Interactive Task Learning (ITL) concerns learning about unforeseen domain concepts via natural interactions with human users.


Beyond RMSE: Do machine-learned models of road user interaction produce human-like behavior?

no code implementations · 22 Jun 2022 · Aravinda Ramakrishnan Srinivasan, Yi-Shin Lin, Morris Antonello, Anthony Knittel, Mohamed Hasan, Majd Hawasly, John Redford, Subramanian Ramamoorthy, Matteo Leonetti, Jac Billington, Richard Romano, Gustav Markkula

Even though the models' RMSE value differed, all the models captured the kinematic-dependent merging behavior but struggled at varying degrees to capture the more nuanced courtesy lane change and highway lane change behavior.

Autonomous Vehicles
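The headline metric the snippet argues is insufficient on its own can be stated compactly; a minimal sketch (illustrative, not tied to the paper's evaluation code):

```python
import math

def rmse(predicted, actual):
    # Root-mean-square error between predicted and observed trajectories:
    # the aggregate accuracy metric the paper looks "beyond".
    return math.sqrt(
        sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(predicted)
    )
```

Two models can share an RMSE value yet differ sharply in how human-like their merging or lane-change behaviour looks, which is the paper's point.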

Risk-Driven Design of Perception Systems

1 code implementation · 21 May 2022 · Anthony L. Corso, Sydney M. Katz, Craig Innes, Xin Du, Subramanian Ramamoorthy, Mykel J. Kochenderfer

We formulate a risk function to quantify the effect of a given perceptual error on overall safety, and show how we can use it to design safer perception systems by including a risk-dependent term in the loss function and generating training data in risk-sensitive regions.
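A risk-dependent term in a training loss might look something like the following sketch, where the weighting scheme and the `alpha` trade-off parameter are illustrative assumptions rather than the paper's actual formulation:

```python
import numpy as np

def risk_weighted_loss(errors, risks, alpha=1.0):
    """Per-sample squared perceptual error, up-weighted by a risk score.

    errors : perceptual errors (prediction minus ground truth) per sample
    risks  : non-negative risk score per sample, e.g. from a safety analysis
    alpha  : trade-off between plain accuracy and risk sensitivity (assumed form)
    """
    weights = 1.0 + alpha * np.asarray(risks, dtype=float)
    return float(np.mean(weights * np.asarray(errors, dtype=float) ** 2))
```

With `alpha=0` this reduces to ordinary mean squared error; increasing `alpha` concentrates training effort on risk-sensitive samples.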

Learning physics-informed simulation models for soft robotic manipulation: A case study with dielectric elastomer actuators

no code implementations · 25 Feb 2022 · Manu Lahariya, Craig Innes, Chris Develder, Subramanian Ramamoorthy

We simulate the task of using DEA to pull a coin along a surface with frictional contact, using FEM, and evaluate the physics-informed model for simulation, control, and inference.

Robust Learning from Observation with Model Misspecification

1 code implementation · 12 Feb 2022 · Luca Viano, Yu-Ting Huang, Parameswaran Kamalaruban, Craig Innes, Subramanian Ramamoorthy, Adrian Weller

Imitation learning (IL) is a popular paradigm for training policies in robotic systems when specifying the reward function is difficult.

Continuous Control · Imitation Learning +1

Vision Checklist: Towards Testable Error Analysis of Image Models to Help System Designers Interrogate Model Capabilities

no code implementations · 27 Jan 2022 · Xin Du, Benedicte Legastelois, Bhargavi Ganesh, Ajitha Rajan, Hana Chockler, Vaishak Belle, Stuart Anderson, Subramanian Ramamoorthy

Robustness evaluations like our checklist will be crucial in future safety evaluations of visual perception modules, and be useful for a wide range of stakeholders including designers, deployers, and regulators involved in the certification of these systems.

Autonomous Driving

Applications and Techniques for Fast Machine Learning in Science

no code implementations · 25 Oct 2021 · Allison McCarn Deiana, Nhan Tran, Joshua Agar, Michaela Blott, Giuseppe Di Guglielmo, Javier Duarte, Philip Harris, Scott Hauck, Mia Liu, Mark S. Neubauer, Jennifer Ngadiuba, Seda Ogrenci-Memik, Maurizio Pierini, Thea Aarrestad, Steffen Bahr, Jurgen Becker, Anne-Sophie Berthold, Richard J. Bonventre, Tomas E. Muller Bravo, Markus Diefenthaler, Zhen Dong, Nick Fritzsche, Amir Gholami, Ekaterina Govorkova, Kyle J Hazelwood, Christian Herwig, Babar Khan, Sehoon Kim, Thomas Klijnsma, Yaling Liu, Kin Ho Lo, Tri Nguyen, Gianantonio Pezzullo, Seyedramin Rasoulinezhad, Ryan A. Rivera, Kate Scholberg, Justin Selig, Sougata Sen, Dmitri Strukov, William Tang, Savannah Thais, Kai Lukas Unger, Ricardo Vilalta, Belina von Krosigk, Thomas K. Warburton, Maria Acosta Flechas, Anthony Aportela, Thomas Calvet, Leonardo Cristella, Daniel Diaz, Caterina Doglioni, Maria Domenica Galati, Elham E Khoda, Farah Fahim, Davide Giri, Benjamin Hawks, Duc Hoang, Burt Holzman, Shih-Chieh Hsu, Sergo Jindariani, Iris Johnson, Raghav Kansal, Ryan Kastner, Erik Katsavounidis, Jeffrey Krupa, Pan Li, Sandeep Madireddy, Ethan Marx, Patrick McCormack, Andres Meza, Jovan Mitrevski, Mohammed Attia Mohammed, Farouk Mokhtar, Eric Moreno, Srishti Nagu, Rohin Narayan, Noah Palladino, Zhiqiang Que, Sang Eon Park, Subramanian Ramamoorthy, Dylan Rankin, Simon Rothman, Ashish Sharma, Sioni Summers, Pietro Vischia, Jean-Roch Vlimant, Olivia Weng

In this community review report, we discuss applications and techniques for fast machine learning (ML) in science -- the concept of integrating powerful ML methods into the real-time experimental data processing loop to accelerate scientific discovery.

BIG-bench Machine Learning

Active Altruism Learning and Information Sufficiency for Autonomous Driving

no code implementations · 9 Oct 2021 · Jack Geary, Henry Gouk, Subramanian Ramamoorthy

Safe interaction between vehicles requires the ability to choose actions that reveal the preferences of the other vehicles.

Active Learning · Autonomous Driving

Beyond Discriminant Patterns: On the Robustness of Decision Rule Ensembles

no code implementations · 21 Sep 2021 · Xin Du, Subramanian Ramamoorthy, Wouter Duivesteijn, Jin Tian, Mykola Pechenizkiy

Specifically, we propose to leverage causal knowledge by regarding the distributional shifts in subpopulations and deployment environments as the results of interventions on the underlying system.

Attainment Regions in Feature-Parameter Space for High-Level Debugging in Autonomous Robots

no code implementations · 6 Aug 2021 · Simón C. Smith, Subramanian Ramamoorthy

When the robot successfully executes the task, we use the attainment regions to gain insights into the limits of the controller, and its robustness.

counterfactual · Gaussian Processes

PILOT: Efficient Planning by Imitation Learning and Optimisation for Safe Autonomous Driving

no code implementations · 1 Nov 2020 · Henry Pulver, Francisco Eiras, Ludovico Carozza, Majd Hawasly, Stefano V. Albrecht, Subramanian Ramamoorthy

In this paper, we present PILOT -- a planning framework that comprises an imitation neural network followed by an efficient optimiser that actively rectifies the network's plan, guaranteeing fulfilment of safety and comfort requirements.

Autonomous Driving · Imitation Learning

Residual Learning from Demonstration: Adapting DMPs for Contact-rich Manipulation

no code implementations · 18 Aug 2020 · Todor Davchev, Kevin Sebastian Luck, Michael Burke, Franziska Meier, Stefan Schaal, Subramanian Ramamoorthy

Dynamic Movement Primitives (DMP) are a popular way of extracting such policies through behaviour cloning (BC) but can struggle in the context of insertion.

Behavioural cloning · Friction +1

Action sequencing using visual permutations

1 code implementation · 3 Aug 2020 · Michael Burke, Kartic Subr, Subramanian Ramamoorthy

Humans can easily reason about the sequence of high level actions needed to complete tasks, but it is particularly difficult to instil this ability in robots trained from relatively few examples.

Semi-supervised Learning From Demonstration Through Program Synthesis: An Inspection Robot Case Study

no code implementations · 23 Jul 2020 · Simón C. Smith, Subramanian Ramamoorthy

The system induces a controller program by learning from immersive demonstrations using sequential importance sampling.

Clustering · counterfactual +2
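Sequential importance sampling, as mentioned in the snippet, follows a propose-then-reweight pattern; a generic sketch of one step (the proposal and likelihood here are placeholders, not the paper's controller-induction models):

```python
import numpy as np

def sis_step(particles, weights, propose, likelihood, rng):
    """One sequential-importance-sampling step.

    Each particle is propagated through the proposal distribution, then
    reweighted by the likelihood of the current observation and renormalised.
    `propose` and `likelihood` are illustrative callables, not the paper's models.
    """
    particles = np.array([propose(p, rng) for p in particles])
    weights = weights * np.array([likelihood(p) for p in particles])
    return particles, weights / weights.sum()
```

In practice a resampling step is added when the effective sample size drops, to avoid weight degeneracy.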

Learning from Demonstration with Weakly Supervised Disentanglement

no code implementations · ICLR 2021 · Yordan Hristov, Subramanian Ramamoorthy

We show that such alignment is best achieved through the use of labels from the end user, in an appropriately restricted vocabulary, in contrast to the conventional approach of the designer picking a prior over the latent variables.

Disentanglement · Reading Comprehension +1

Affordances in Robotic Tasks -- A Survey

no code implementations · 15 Apr 2020 · Paola Ardón, Èric Pairet, Katrin S. Lohan, Subramanian Ramamoorthy, Ronald P. A. Petrick

Affordances are key attributes of what must be perceived by an autonomous robotic agent in order to effectively interact with novel objects.


Integrating Planning and Interpretable Goal Recognition for Autonomous Driving

2 code implementations · 6 Feb 2020 · Stefano V. Albrecht, Cillian Brewitt, John Wilhelm, Francisco Eiras, Mihai Dobre, Subramanian Ramamoorthy

The ability to predict the intentions and driving trajectories of other vehicles is a key problem for autonomous driving.


Learning rewards for robotic ultrasound scanning using probabilistic temporal ranking

no code implementations · 4 Feb 2020 · Michael Burke, Katie Lu, Daniel Angelov, Artūras Straižys, Craig Innes, Kartic Subr, Subramanian Ramamoorthy

This work considers the inverse problem, where the goal of the task is unknown, and a reward function needs to be inferred from exploratory example demonstrations provided by a demonstrator, for use in a downstream informative path-planning policy.

Elaborating on Learned Demonstrations with Temporal Logic Specifications

no code implementations · 3 Feb 2020 · Craig Innes, Subramanian Ramamoorthy

We also implement our system on a PR-2 robot to show how a demonstrator can start with an initial (sub-optimal) demonstration, then interactively improve task success by including additional specifications enforced with our differentiable LTL loss.
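A differentiable LTL loss typically replaces the hard min/max semantics of temporal operators with smooth approximations; a sketch of a soft "always" operator over per-timestep robustness values (an illustrative approximation, not the paper's exact loss):

```python
import numpy as np

def soft_always(robustness, beta=10.0):
    """Soft minimum over per-timestep robustness values.

    The LTL operator G(phi) ("always phi") is satisfied when the minimum
    robustness over time is positive; the log-sum-exp softmin below is a
    common differentiable surrogate (a sketch, with `beta` controlling
    how closely it approximates the hard minimum).
    """
    r = np.asarray(robustness, dtype=float)
    return -np.log(np.sum(np.exp(-beta * r))) / beta
```

The softmin lower-bounds the true minimum and converges to it as `beta` grows, so gradient descent on its negation pushes every timestep toward satisfying the constraint.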

Lower Dimensional Kernels for Video Discriminators

1 code implementation · 18 Dec 2019 · Emmanuel Kahembwe, Subramanian Ramamoorthy

This work presents an analysis of the discriminators used in Generative Adversarial Networks (GANs) for Video.

Video Generation

E-HBA: Using Action Policies for Expert Advice and Agent Typification

no code implementations · 23 Jul 2019 · Stefano V. Albrecht, Jacob W. Crandall, Subramanian Ramamoorthy

Past research has studied two approaches to utilise predefined policy sets in repeated interactions: as experts, to dictate our own actions, and as types, to characterise the behaviour of other agents.

Comparative Evaluation of Multiagent Learning Algorithms in a Diverse Set of Ad Hoc Team Problems

no code implementations · 22 Jul 2019 · Stefano V. Albrecht, Subramanian Ramamoorthy

In this work, we empirically evaluate five MAL algorithms, representing major approaches to multiagent learning but originally developed with the homogeneous setting in mind, to understand their behaviour in a set of ad hoc team problems.


Composing Diverse Policies for Temporally Extended Tasks

no code implementations · 18 Jul 2019 · Daniel Angelov, Yordan Hristov, Michael Burke, Subramanian Ramamoorthy

Robot control policies for temporally extended and sequenced tasks are often characterized by discontinuous switches between different local dynamics.

Hierarchical Reinforcement Learning · Motion Planning

On Convergence and Optimality of Best-Response Learning with Policy Types in Multiagent Systems

no code implementations · 15 Jul 2019 · Stefano V. Albrecht, Subramanian Ramamoorthy

In this paper, we provide theoretical guidance on two central design parameters of this method: Firstly, it is important that the user choose a posterior which can learn the true distribution of latent types, as otherwise suboptimal actions may be chosen.

An Empirical Study on the Practical Impact of Prior Beliefs over Policy Types

no code implementations · 10 Jul 2019 · Stefano V. Albrecht, Jacob W. Crandall, Subramanian Ramamoorthy

To address this problem, researchers have studied learning algorithms which compute posterior beliefs over a hypothesised set of policies, based on the observed actions of the other agents.

Exploiting Causality for Selective Belief Filtering in Dynamic Bayesian Networks (Extended Abstract)

no code implementations · 10 Jul 2019 · Stefano V. Albrecht, Subramanian Ramamoorthy

Belief filtering in DBNs is the task of inferring the belief state (i. e. the probability distribution over process states) based on incomplete and uncertain observations.
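In the simplest case of a single discrete state variable, the belief-filtering task described here reduces to a predict-correct update; a minimal sketch (the full DBN setting factorises the state, which this illustration omits):

```python
import numpy as np

def filter_step(belief, transition, obs_likelihood):
    """One belief-filtering step.

    belief         : current distribution over process states
    transition     : transition[i, j] = P(next state j | current state i)
    obs_likelihood : P(observation | state), for the observation just received

    Predict with the transition model, correct with the observation
    likelihood, and renormalise to obtain the new belief state.
    """
    predicted = belief @ transition
    updated = predicted * obs_likelihood
    return updated / updated.sum()
```

The paper's contribution concerns exploiting causal structure (passivity) so that only parts of this update need to be recomputed; the sketch shows only the generic exact filter.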

Hybrid system identification using switching density networks

1 code implementation · 9 Jul 2019 · Michael Burke, Yordan Hristov, Subramanian Ramamoorthy

This paper introduces switching density networks, which rely on a categorical reparametrisation for hybrid system identification.

Imitation Learning · regression
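A categorical reparametrisation is commonly implemented with the Gumbel-softmax trick; a sketch of that general idea (one standard construction, not necessarily the exact reparametrisation used in the paper):

```python
import numpy as np

def gumbel_softmax(logits, temperature=0.5, rng=None):
    """Sample an approximately one-hot vector from categorical logits.

    Gumbel noise added to the logits makes the sample stochastic, while the
    temperature-controlled softmax keeps it differentiable with respect to
    the logits -- the property that lets a network learn which discrete
    mode (e.g. which local controller) is active.
    """
    rng = np.random.default_rng() if rng is None else rng
    g = -np.log(-np.log(rng.uniform(1e-10, 1.0, size=len(logits))))
    z = (np.asarray(logits, dtype=float) + g) / temperature
    z = z - z.max()          # numerical stability before exponentiation
    e = np.exp(z)
    return e / e.sum()
```

As the temperature approaches zero the samples approach exact one-hot vectors, at the cost of higher-variance gradients.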

Learning Grasp Affordance Reasoning through Semantic Relations

no code implementations · 24 Jun 2019 · Paola Ardón, Èric Pairet, Ronald P. A. Petrick, Subramanian Ramamoorthy, Katrin S. Lohan

We use Markov Logic Networks to build a knowledge base graph representation to obtain a probability distribution of grasp affordances for an object.


DynoPlan: Combining Motion Planning and Deep Neural Network based Controllers for Safe HRL

no code implementations · 24 Jun 2019 · Daniel Angelov, Yordan Hristov, Subramanian Ramamoorthy

Many realistic robotics tasks are best solved compositionally, through control architectures that sequentially invoke primitives and achieve error correction through the use of loops and conditionals taking the system back to alternative earlier states.


Iterative Model-Based Reinforcement Learning Using Simulations in the Differentiable Neural Computer

no code implementations · 17 Jun 2019 · Adeel Mufti, Svetlin Penkov, Subramanian Ramamoorthy

The agent improves its policy in simulations generated by the DNC model and rolls out the policy to the live environment, collecting experiences in new portions or tasks of the environment for further learning.

Model-based Reinforcement Learning · reinforcement-learning +1

Learning Programmatically Structured Representations with Perceptor Gradients

no code implementations · ICLR 2019 · Svetlin Penkov, Subramanian Ramamoorthy

We present the perceptor gradients algorithm -- a novel approach to learning symbolic representations based on the idea of decomposing an agent's policy into i) a perceptor network extracting symbols from raw observation data and ii) a task encoding program which maps the input symbols to output actions.

Using Causal Analysis to Learn Specifications from Task Demonstrations

no code implementations · 4 Mar 2019 · Daniel Angelov, Yordan Hristov, Subramanian Ramamoorthy

In this work we show that it is possible to learn a generative model for distinct user behavioral types, extracted from human demonstrations, by enforcing clustering of preferred task solutions within the latent space.


From explanation to synthesis: Compositional program induction for learning from demonstration

no code implementations · 27 Feb 2019 · Michael Burke, Svetlin Penkov, Subramanian Ramamoorthy

This work introduces an approach to learning hybrid systems from demonstrations, with an emphasis on extracting models that are explicitly verifiable and easily interpreted by robot operators.

Program induction

Learning Best Response Strategies for Agents in Ad Exchanges

no code implementations · 10 Feb 2019 · Stavros Gerakaris, Subramanian Ramamoorthy

We address this problem using the Harsanyi-Bellman Ad Hoc Coordination (HBA) algorithm, which conceptualises this interaction in terms of a Stochastic Bayesian Game and arrives at optimal actions by best responding with respect to probabilistic beliefs maintained over a candidate set of opponent behaviour profiles.
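The probabilistic beliefs over opponent behaviour profiles that HBA maintains are Bayesian posteriors over a hypothesised type set; a generic sketch of such a belief update (the type set and likelihood table are illustrative, not the paper's ad-exchange model):

```python
import numpy as np

def type_posterior(prior, action_likelihoods, observed_actions):
    """Bayesian belief over hypothesised opponent types.

    prior              : initial probability of each type
    action_likelihoods : action_likelihoods[k, a] = P(action a | type k)
    observed_actions   : sequence of observed opponent action indices
    """
    post = np.array(prior, dtype=float)
    for a in observed_actions:
        post *= action_likelihoods[:, a]   # weight each type by how well it explains a
        post /= post.sum()                 # renormalise to a distribution
    return post
```

HBA then best-responds against the type mixture this posterior defines, rather than against any single hypothesised profile.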


Interpretable Latent Spaces for Learning from Demonstration

no code implementations · 17 Jul 2018 · Yordan Hristov, Alex Lascarides, Subramanian Ramamoorthy

Effective human-robot interaction, such as in robot learning from human demonstration, requires the learning agent to be able to ground abstract concepts (such as those contained within instructions) in a corresponding high-dimensional sensory input stream from the world.

The ORCA Hub: Explainable Offshore Robotics through Intelligent Interfaces

no code implementations · 6 Mar 2018 · Helen Hastie, Katrin Lohan, Mike Chantler, David A. Robb, Subramanian Ramamoorthy, Ron Petrick, Sethu Vijayakumar, David Lane

To enable this to happen, the remote operator will need a high level of situation awareness and key to this is the transparency of what the autonomous systems are doing and why.

Reasoning about Unforeseen Possibilities During Policy Learning

no code implementations · 10 Jan 2018 · Craig Innes, Alex Lascarides, Stefano V. Albrecht, Subramanian Ramamoorthy, Benjamin Rosman

Methods for learning optimal policies in autonomous agents often assume that the way the domain is conceptualised---its possible states and actions and their causal structure---is known in advance and does not change during learning.

Using Program Induction to Interpret Transition System Dynamics

no code implementations · 26 Jul 2017 · Svetlin Penkov, Subramanian Ramamoorthy

Explaining and reasoning about processes which underlie observed black-box phenomena enables the discovery of causal mechanisms, derivation of suitable abstract representations and the formulation of more robust predictions.

Program induction

Grounding Symbols in Multi-Modal Instructions

no code implementations · WS 2017 · Yordan Hristov, Svetlin Penkov, Alex Lascarides, Subramanian Ramamoorthy

As robots begin to cohabit with humans in semi-structured environments, the need arises to understand instructions involving rich variability---for instance, learning to ground symbols in the physical world.


Explaining Transition Systems through Program Induction

no code implementations · 23 May 2017 · Svetlin Penkov, Subramanian Ramamoorthy

Explaining and reasoning about processes which underlie observed black-box phenomena enables the discovery of causal mechanisms, derivation of suitable abstract representations and the formulation of more robust predictions.

Program induction

Estimating Activity at Multiple Scales using Spatial Abstractions

no code implementations · 25 Jul 2016 · Majd Hawasly, Florian T. Pokorny, Subramanian Ramamoorthy

Autonomous robots operating in dynamic environments must maintain beliefs over a hypothesis space that is rich enough to represent the activities of interest at different scales.

Clustering · Trajectory Clustering

Belief and Truth in Hypothesised Behaviours

no code implementations · 28 Jul 2015 · Stefano V. Albrecht, Jacob W. Crandall, Subramanian Ramamoorthy

The idea is to hypothesise a set of types, each specifying a possible behaviour for the other agents, and to plan our own actions with respect to those types which we believe are most likely, given the observed actions of the agents.

A Game-Theoretic Model and Best-Response Learning Method for Ad Hoc Coordination in Multiagent Systems

no code implementations · 3 Jun 2015 · Stefano V. Albrecht, Subramanian Ramamoorthy

Based on this model, we derive a solution, called Harsanyi-Bellman Ad Hoc Coordination (HBA), which utilises the concept of Bayesian Nash equilibrium in a planning procedure to find optimal actions in the sense of Bellman optimal control.

Bayesian Policy Reuse

no code implementations · 1 May 2015 · Benjamin Rosman, Majd Hawasly, Subramanian Ramamoorthy

We formalise the problem of policy reuse, and present an algorithm for efficiently responding to a novel task instance by reusing a policy from the library of existing policies, where the choice is based on observed 'signals' which correlate to policy performance.

Bayesian Optimisation
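The policy-selection step described in the snippet can be sketched as a Bayesian update over task types from the observed signal, followed by picking the library policy with the highest expected performance (a simplified illustration, not the paper's algorithm; the likelihood and performance tables are assumed inputs):

```python
import numpy as np

def select_policy(prior, signal_likelihood, performance, observed_signal):
    """Pick a policy from a library for a novel task instance.

    prior             : belief over task types before any observation
    signal_likelihood : signal_likelihood[t, s] = P(signal s | task type t)
    performance       : performance[t, p] = expected return of policy p on type t
    observed_signal   : index of the signal observed on the new instance
    """
    post = prior * signal_likelihood[:, observed_signal]
    post = post / post.sum()
    expected = post @ performance          # expected return of each policy
    return int(np.argmax(expected)), post
```

Repeating the update as more signals arrive sharpens the posterior, so the reused policy converges on the best library entry for the task at hand.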

Exploiting Causality for Selective Belief Filtering in Dynamic Bayesian Networks

no code implementations · 30 Jan 2014 · Stefano V. Albrecht, Subramanian Ramamoorthy

Furthermore, we demonstrate how passivity occurs naturally in a complex system such as a multi-robot warehouse, and how PSBF can exploit this to accelerate the filtering task.

Clustering Markov Decision Processes For Continual Transfer

no code implementations · 15 Nov 2013 · M. M. Hassan Mahmud, Majd Hawasly, Benjamin Rosman, Subramanian Ramamoorthy

The source subset forms an '$\epsilon$-net' over the original set of MDPs, in the sense that for each previous MDP $M_p$, there is a source $M^s$ whose optimal policy has $<\epsilon$ regret in $M_p$.

Clustering Transfer Learning
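The $\epsilon$-net construction over MDPs has a simple finite analogue in a greedy covering procedure; a sketch, with a generic `distance` standing in for the paper's regret-based criterion:

```python
def greedy_epsilon_net(points, distance, eps):
    """Greedy cover: choose centres so every point lies within eps of one.

    `points` and `distance` are illustrative stand-ins: in the paper the
    elements are MDPs and closeness is measured by the regret of one MDP's
    optimal policy executed in another.
    """
    centres = []
    for p in points:
        # keep p as a new centre only if no existing centre already covers it
        if all(distance(p, c) > eps for c in centres):
            centres.append(p)
    return centres
```

The returned centres play the role of the source subset: every original element is within $\epsilon$ of some centre, so transferring from centres alone bounds the worst-case loss.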
