Search Results for author: Rahul Nair

Found 17 papers, 3 papers with code

Explaining Knock-on Effects of Bias Mitigation

no code implementations • 1 Dec 2023 • Svetoslav Nizhnichenkov, Rahul Nair, Elizabeth Daly, Brian Mac Namee

In this paper, we aim to characterise impacted cohorts when mitigation interventions are applied.

Fairness
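As a rough illustration of what characterising impacted cohorts can look like in practice (a generic sketch, not the paper's method), one can compare a model's predictions before and after a mitigation intervention and report, per cohort, how many decisions flip. The column names below are hypothetical.

```python
# Hypothetical illustration: which cohorts see their decisions flip after a
# bias-mitigation intervention is applied? Not the paper's method, just the idea.
import pandas as pd

def impacted_cohorts(X: pd.DataFrame, pred_base, pred_mitigated, cohort_cols):
    """Summarise, per cohort, how many predictions changed after mitigation."""
    df = X.copy()
    df["flipped"] = (pred_base != pred_mitigated)
    return (df.groupby(cohort_cols)["flipped"]
              .agg(size="count", n_flipped="sum", flip_rate="mean")
              .reset_index()
              .sort_values("flip_rate", ascending=False))

# Example with illustrative cohort columns:
# report = impacted_cohorts(X_test, base_model.predict(X_test),
#                           mitigated_model.predict(X_test), ["gender", "age_band"])
```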

Iterative Reward Shaping using Human Feedback for Correcting Reward Misspecification

1 code implementation • 30 Aug 2023 • Jasmina Gajcin, James McCarthy, Rahul Nair, Radu Marinescu, Elizabeth Daly, Ivana Dusparic

Our approach allows the user to provide trajectory-level feedback on the agent's behavior during training, which can be integrated as a reward shaping signal in the following training iteration (see the sketch below).

Reinforcement Learning (RL)
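A minimal sketch of the idea described above, assuming trajectory-level feedback arrives as binary good/bad labels and that a simple classifier over state features stands in for the learned shaping signal; this is an illustration, not the authors' implementation.

```python
# Turn trajectory-level human feedback into a reward shaping signal for the
# next training iteration (illustrative only; feature choice is hypothetical).
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_shaping_model(trajectories, labels):
    """trajectories: list of (T_i, d) arrays of state features; labels: 1 good, 0 bad."""
    X = np.concatenate(trajectories)                                  # pool all states
    y = np.concatenate([np.full(len(t), l) for t, l in zip(trajectories, labels)])
    return LogisticRegression(max_iter=1000).fit(X, y)

def shaped_reward(env_reward, state_features, shaping_model, weight=0.1):
    """Environment reward plus a bonus proportional to 'looks like good behaviour'."""
    bonus = shaping_model.predict_proba(state_features.reshape(1, -1))[0, 1]
    return env_reward + weight * bonus
```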

Co-creating a globally interpretable model with human input

no code implementations • 23 Jun 2023 • Rahul Nair

We consider an aggregated human-AI collaboration aimed at generating a joint interpretable model.

Decision Making

Interpretable Differencing of Machine Learning Models

1 code implementation • 10 Jun 2023 • Swagatam Haldar, Diptikalyan Saha, Dennis Wei, Rahul Nair, Elizabeth M. Daly

Understanding the differences between machine learning (ML) models is of interest in scenarios ranging from choosing amongst a set of competing models, to updating a deployed model with new training data.

Classification
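One simple way to surface interpretable differences between two classifiers (a generic strategy, not necessarily the paper's approach) is to fit a shallow decision tree on the instances where the models disagree and read off its rules:

```python
# Describe where two classifiers disagree with a shallow, human-readable tree.
from sklearn.tree import DecisionTreeClassifier, export_text

def diff_rules(model_a, model_b, X, feature_names, max_depth=3):
    # 1 where the two models predict different labels, 0 where they agree
    disagree = (model_a.predict(X) != model_b.predict(X)).astype(int)
    tree = DecisionTreeClassifier(max_depth=max_depth).fit(X, disagree)
    return export_text(tree, feature_names=feature_names)
```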

AutoDOViz: Human-Centered Automation for Decision Optimization

no code implementations • 19 Feb 2023 • Daniel Karl I. Weidele, Shazia Afzal, Abel N. Valente, Cole Makuch, Owen Cornec, Long Vu, Dharmashankar Subramanian, Werner Geyer, Rahul Nair, Inge Vejsbjerg, Radu Marinescu, Paulito Palmes, Elizabeth M. Daly, Loraine Franke, Daniel Haehn

AutoDOViz seeks to lower the barrier of entry for data scientists when specifying reinforcement learning problems, to leverage the benefits of AutoDO algorithms for RL pipeline search, and finally to create visualizations and policy insights that support the typically interactive process of communicating problem formulations and solution proposals between DO experts and domain experts.

AutoML • reinforcement-learning • +1

Boolean Decision Rules for Reinforcement Learning Policy Summarisation

no code implementations • 18 Jul 2022 • James McCarthy, Rahul Nair, Elizabeth Daly, Radu Marinescu, Ivana Dusparic

Explainability of Reinforcement Learning (RL) policies remains a challenging research problem, particularly when considering RL in a safety context.

reinforcement-learning • Reinforcement Learning (RL)

User Driven Model Adjustment via Boolean Rule Explanations

no code implementations • 28 Mar 2022 • Elizabeth M. Daly, Massimiliano Mattetti, Öznur Alkan, Rahul Nair

AI solutions are heavily dependent on the quality and accuracy of the input training data; however, the training data may not always fully reflect the most up-to-date policy landscape or may be missing business logic.

Decision Making

FROTE: Feedback Rule-Driven Oversampling for Editing Models

no code implementations • 4 Jan 2022 • Öznur Alkan, Dennis Wei, Massimiliano Mattetti, Rahul Nair, Elizabeth M. Daly, Diptikalyan Saha

However, in such scenarios, it may take time for sufficient training data to accumulate in order to retrain the model to reflect the new decision boundaries.

Data Augmentation • Management
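For intuition, a heavily simplified sketch of feedback-rule-driven oversampling (an illustration only, not the FROTE algorithm): instances covered by a user rule are duplicated with the rule's desired label and appended to the training data before retraining.

```python
# Duplicate instances covered by a user feedback rule with the rule's desired
# label, so the edited behaviour is reflected without waiting for new data.
import pandas as pd

def oversample_for_rule(X: pd.DataFrame, y: pd.Series, rule, desired_label, factor=5):
    """rule: boolean predicate over rows, e.g. lambda r: r['age'] > 60."""
    covered = X[X.apply(rule, axis=1)]
    extra_X = pd.concat([covered] * factor, ignore_index=True)
    extra_y = pd.Series([desired_label] * len(extra_X))
    return (pd.concat([X, extra_X], ignore_index=True),
            pd.concat([y, extra_y], ignore_index=True))
```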

Contrastive Explanations for Comparing Preferences of Reinforcement Learning Agents

no code implementations • 17 Dec 2021 • Jasmina Gajcin, Rahul Nair, Tejaswini Pedapati, Radu Marinescu, Elizabeth Daly, Ivana Dusparic

In complex tasks where the reward function is not straightforward and consists of a set of objectives, multiple reinforcement learning (RL) policies that perform the task adequately, but employ different strategies, can be trained by adjusting the impact of individual objectives on the reward function (see the sketch below).

Autonomous Driving • reinforcement-learning • +1
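The training setup described in the snippet above amounts to scalarising a multi-objective reward with different weight vectors; a minimal sketch follows, with illustrative objective names.

```python
# Scalarised multi-objective reward: different weight vectors produce policies
# with different preferences (component names here are illustrative).
def scalarised_reward(objective_values: dict, weights: dict) -> float:
    """objective_values and weights are keyed by objective name."""
    return sum(weights[k] * v for k, v in objective_values.items())

# Two hypothetical driving agents with different preferences:
# cautious   = {"progress": 0.2, "safety": 0.8}
# aggressive = {"progress": 0.8, "safety": 0.2}
```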

Neo: Generalizing Confusion Matrix Visualization to Hierarchical and Multi-Output Labels

1 code implementation • 24 Oct 2021 • Jochen Görtler, Fred Hohman, Dominik Moritz, Kanit Wongsuphasawat, Donghao Ren, Rahul Nair, Marc Kirchner, Kayur Patel

The confusion matrix, a ubiquitous visualization for helping people evaluate machine learning models, is a tabular layout that compares predicted class labels against actual class labels over all data instances.

BIG-bench Machine Learning
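For reference, the standard flat, single-output confusion matrix that Neo generalises can be computed in a couple of lines; the hierarchical and multi-output variants are beyond this sketch.

```python
# Tabulate actual vs. predicted class labels over all instances.
import pandas as pd

def confusion_matrix(y_true, y_pred):
    return pd.crosstab(pd.Series(y_true, name="actual"),
                       pd.Series(y_pred, name="predicted"))

# confusion_matrix(["cat", "dog", "dog"], ["cat", "cat", "dog"])
#   predicted  cat  dog
#   actual
#   cat          1    0
#   dog          1    1
```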

Perfusion Quantification from Endoscopic Videos: Learning to Read Tumor Signatures

no code implementations • 25 Jun 2020 • Sergiy Zhuk, Jonathan P. Epperlein, Rahul Nair, Seshu Thirupati, Pol Mac Aonghusa, Ronan Cahill, Donal O'Shea

Intra-operative identification of malignant versus benign or healthy tissue is a major challenge in fluorescence guided cancer surgery.

Towards Automated Extraction of Business Constraints from Unstructured Regulatory Text

no code implementations • COLING 2018 • Rahul Nair, Killian Levacher, Martin Stephenson

Large organizations spend considerable resources in reviewing regulations and ensuring that their business processes are compliant with the law.

Entity Extraction using GAN • Question Answering

Learning the Correction for Multi-Path Deviations in Time-of-Flight Cameras

no code implementations • 13 Dec 2015 • Mojmir Mutny, Rahul Nair, Jens-Malte Gottfried

Secondly, we used this dataset to construct a learning model that predicts real-valued corrections to the ToF data using Random Forests.
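A schematic version of that learning step, assuming per-pixel features and a known ground-truth depth error are available; the exact features and targets used in the paper may differ.

```python
# Random forest regression predicting a per-pixel depth correction from
# ToF-derived features (feature choice here is hypothetical).
from sklearn.ensemble import RandomForestRegressor

def fit_correction_model(features, depth_error):
    """features: (n_pixels, n_features); depth_error: ground truth minus measured depth."""
    return RandomForestRegressor(n_estimators=100).fit(features, depth_error)

# corrected_depth = measured_depth + model.predict(features)
```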

Reflection Modeling for Passive Stereo

no code implementations • ICCV 2015 • Rahul Nair, Andrew Fitzgibbon, Daniel Kondermann, Carsten Rother

Stereo reconstruction of real-world scenes faces many challenges that still need to be addressed.

Ensemble Learning for Confidence Measures in Stereo Vision

no code implementations • CVPR 2013 • Ralf Haeusler, Rahul Nair, Daniel Kondermann

With the aim of improving the accuracy of stereo confidence measures, we apply the random decision forest framework to a large set of diverse stereo confidence measures.

Ensemble Learning
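A compact sketch of the approach described above, assuming each confidence measure is available as a per-pixel map and ground-truth disparity correctness is known for training; this is an illustration rather than the paper's exact pipeline.

```python
# Combine several per-pixel stereo confidence measures with a random forest
# that predicts whether the estimated disparity at each pixel is reliable.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fit_confidence_ensemble(confidence_maps, disparity_is_correct):
    """confidence_maps: list of (H, W) arrays, one per measure;
    disparity_is_correct: (H, W) boolean ground-truth mask."""
    X = np.stack([m.ravel() for m in confidence_maps], axis=1)   # one row per pixel
    y = disparity_is_correct.ravel()
    return RandomForestClassifier(n_estimators=100).fit(X, y)
```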
