1 code implementation • 13 Jan 2025 • Buse Sibel Korkmaz, Rahul Nair, Elizabeth M. Daly, Evangelos Anagnostopoulos, Christos Varytimidis, Antonio del Rio Chanona
The experiments on a public hiring dataset and a real-world hiring platform showcase how large language models can assist in identifying and mitigating biases in the real world.
no code implementations • 15 Dec 2024 • Bhanu Tokas, Rahul Nair, Hannah Kerner
However, these metrics fail to measure biases when A is balanced with T. To measure bias amplification in balanced datasets, recent work proposed a predictability-based metric called leakage amplification.
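A predictability-based metric of this kind can be sketched as follows. This is an illustrative toy version, not the paper's actual formulation: the dataset, the simple majority-vote attacker, and the function names are all assumptions. The idea is to compare how well a protected attribute A can be predicted from the model's outputs versus from the ground-truth task labels T.

```python
import numpy as np

def attacker_accuracy(signal, protected):
    """Accuracy of the best deterministic attacker: for each distinct
    signal value, guess the majority protected-attribute value that
    co-occurs with it."""
    correct = 0
    for v in np.unique(signal):
        mask = signal == v
        correct += np.bincount(protected[mask]).max()
    return correct / len(signal)

def leakage_amplification(labels, preds, protected):
    """Toy sketch: how much more the model's predictions reveal the
    protected attribute than the ground-truth labels already do."""
    return (attacker_accuracy(preds, protected)
            - attacker_accuracy(labels, protected))

# Balanced dataset: each label value co-occurs equally with both
# protected groups, so co-occurrence metrics would report no bias.
labels = np.array([0, 0, 1, 1])
protected = np.array([0, 1, 0, 1])
preds = np.array([0, 1, 0, 1])  # predictions track the protected attribute
```

Here the attacker gains nothing from the balanced labels (accuracy 0.5) but recovers the protected attribute perfectly from the predictions, so the sketch reports a positive amplification even though A is balanced with T.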
no code implementations • 15 Dec 2024 • Rahul Nair, Gabriel Tseng, Esther Rolf, Bhanu Tokas, Hannah Kerner
While earlier work studied general-purpose image datasets (e.g., ImageNet) and simple tasks like image recognition, we investigated geo-biases in real-world driving datasets on a more complex task: instance segmentation.
no code implementations • 15 Oct 2024 • Nico Wagner, Michael Desmond, Rahul Nair, Zahra Ashktorab, Elizabeth M. Daly, Qian Pan, Martín Santillán Cooper, James M. Johnson, Werner Geyer
In this paper, we introduce a novel method for quantifying uncertainty designed to enhance the trustworthiness of LLM-as-a-Judge evaluations.
no code implementations • 20 May 2024 • Jan-Christoph Klie, Juan Haladjian, Marc Kirchner, Rahul Nair
Basing estimates on small sample sizes, however, can lead to imprecise values for the error rate.
no code implementations • 21 Feb 2024 • Amit Dhurandhar, Rahul Nair, Moninder Singh, Elizabeth Daly, Karthikeyan Natesan Ramamurthy
and a set of LLMs, we rank them without access to any ground truth or reference responses.
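One generic way to rank models without ground truth is peer evaluation with score aggregation; this is a hypothetical sketch of that general idea, not the paper's actual method, and the win-matrix setup is an assumption.

```python
def borda_rank(wins):
    """Rank models from a pairwise win matrix, where wins[i][j] counts
    how often model i's response was preferred over model j's by the
    other models acting as judges (no reference answers needed).
    Returns model indices, best first."""
    scores = [sum(row) for row in wins]
    return sorted(range(len(wins)), key=lambda i: -scores[i])

# Toy example with three models judging each other's responses.
wins = [
    [0, 3, 4],  # model 0 preferred 7 times in total
    [1, 0, 2],  # model 1 preferred 3 times
    [0, 2, 0],  # model 2 preferred 2 times
]
```

With this matrix, `borda_rank(wins)` orders the models 0, 1, 2 purely from their peers' preferences.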
no code implementations • 1 Dec 2023 • Svetoslav Nizhnichenkov, Rahul Nair, Elizabeth Daly, Brian Mac Namee
In this paper, we aim to characterise impacted cohorts when mitigation interventions are applied.
1 code implementation • 30 Aug 2023 • Jasmina Gajcin, James McCarthy, Rahul Nair, Radu Marinescu, Elizabeth Daly, Ivana Dusparic
Our approach allows the user to provide trajectory-level feedback on the agent's behavior during training, which can be integrated as a reward shaping signal in the following training iteration.
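Integrating such feedback as a shaping signal can be sketched minimally as below; the table-based feedback model, the weighting parameter `beta`, and the function names are illustrative assumptions rather than the paper's implementation.

```python
def shaped_reward(env_reward, state, action, feedback, beta=0.5):
    """Augment the environment reward with a shaping term distilled
    from the user's trajectory-level feedback.

    feedback: maps (state, action) pairs to a score in [-1, 1]
              derived from the user's ratings of past trajectories.
    beta:     weight of the shaping term relative to the env reward.
    """
    return env_reward + beta * feedback.get((state, action), 0.0)

# A user disliked trajectories that took action 'a' in state 's',
# so that pair carries a negative shaping score in the next iteration.
feedback = {("s", "a"): -1.0}
```

Unrated state-action pairs fall back to a shaping term of zero, so the agent's objective is only perturbed where the user actually gave feedback.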
no code implementations • 23 Jun 2023 • Rahul Nair
We consider an aggregated human-AI collaboration aimed at generating a joint interpretable model.
1 code implementation • 10 Jun 2023 • Swagatam Haldar, Diptikalyan Saha, Dennis Wei, Rahul Nair, Elizabeth M. Daly
Understanding the differences between machine learning (ML) models is of interest in scenarios ranging from choosing amongst a set of competing models, to updating a deployed model with new training data.
no code implementations • 19 Feb 2023 • Daniel Karl I. Weidele, Shazia Afzal, Abel N. Valente, Cole Makuch, Owen Cornec, Long Vu, Dharmashankar Subramanian, Werner Geyer, Rahul Nair, Inge Vejsbjerg, Radu Marinescu, Paulito Palmes, Elizabeth M. Daly, Loraine Franke, Daniel Haehn
AutoDOViz seeks to lower the barrier to entry for data scientists in specifying reinforcement learning problems, to leverage the benefits of AutoDO algorithms for RL pipeline search, and to create visualizations and policy insights that facilitate the typically interactive communication of problem formulations and solution proposals between DO experts and domain experts.
no code implementations • 2 Nov 2022 • Dennis Wei, Rahul Nair, Amit Dhurandhar, Kush R. Varshney, Elizabeth M. Daly, Moninder Singh
Interpretable and explainable machine learning has seen a recent surge of interest.
no code implementations • 18 Jul 2022 • James McCarthy, Rahul Nair, Elizabeth Daly, Radu Marinescu, Ivana Dusparic
Explainability of Reinforcement Learning (RL) policies remains a challenging research problem, particularly when considering RL in a safety context.
no code implementations • 28 Mar 2022 • Elizabeth M. Daly, Massimiliano Mattetti, Öznur Alkan, Rahul Nair
AI solutions are heavily dependent on the quality and accuracy of the input training data; however, the training data may not always fully reflect the most up-to-date policy landscape or may be missing business logic.
no code implementations • 4 Jan 2022 • Öznur Alkan, Dennis Wei, Massimiliano Mattetti, Rahul Nair, Elizabeth M. Daly, Diptikalyan Saha
However, in such scenarios, it may take time for sufficient training data to accumulate in order to retrain the model to reflect the new decision boundaries.
no code implementations • 17 Dec 2021 • Jasmina Gajcin, Rahul Nair, Tejaswini Pedapati, Radu Marinescu, Elizabeth Daly, Ivana Dusparic
In complex tasks where the reward function is not straightforward and consists of a set of objectives, multiple reinforcement learning (RL) policies that perform the task adequately, but employ different strategies, can be trained by adjusting the impact of individual objectives on the reward function.
1 code implementation • 24 Oct 2021 • Jochen Görtler, Fred Hohman, Dominik Moritz, Kanit Wongsuphasawat, Donghao Ren, Rahul Nair, Marc Kirchner, Kayur Patel
The confusion matrix, a ubiquitous visualization for helping people evaluate machine learning models, is a tabular layout that compares predicted class labels against actual class labels over all data instances.
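The tabular layout described here can be built in a few lines; this minimal sketch uses the common row-equals-actual, column-equals-predicted convention, which is an assumption rather than anything specified in the snippet.

```python
import numpy as np

def confusion_matrix(actual, predicted, n_classes):
    """Tabulate predicted class labels against actual class labels.

    Rows index the actual class, columns the predicted class, so
    m[i, j] counts instances of class i that were predicted as j;
    the diagonal holds the correctly classified instances.
    """
    m = np.zeros((n_classes, n_classes), dtype=int)
    for a, p in zip(actual, predicted):
        m[a, p] += 1
    return m

# Two classes; one instance of class 0 is misclassified as class 1.
m = confusion_matrix([0, 0, 1, 1], [0, 1, 1, 1], n_classes=2)
```

Per-class error patterns then read off directly: off-diagonal entries in row i show which classes the model confuses class i with.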
no code implementations • 25 Jun 2020 • Sergiy Zhuk, Jonathan P. Epperlein, Rahul Nair, Seshu Thirupati, Pol Mac Aonghusa, Ronan Cahill, Donal O'Shea
Intra-operative identification of malignant versus benign or healthy tissue is a major challenge in fluorescence guided cancer surgery.
no code implementations • COLING 2018 • Rahul Nair, Killian Levacher, Martin Stephenson
Large organizations spend considerable resources in reviewing regulations and ensuring that their business processes are compliant with the law.
no code implementations • 13 Dec 2015 • Mojmir Mutny, Rahul Nair, Jens-Malte Gottfried
Secondly, we used this dataset to construct a learning model to predict real valued corrections to the ToF data using Random Forests.
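Predicting real-valued corrections with a Random Forest can be sketched as below. The synthetic depth data, the linear error model, and the single-feature setup are purely illustrative assumptions; the paper's actual features and dataset are not given in this snippet.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for ToF measurements: raw depth in metres with a
# depth-dependent systematic error (an assumed, illustrative model).
rng = np.random.default_rng(0)
raw_depth = rng.uniform(0.5, 5.0, size=(200, 1))
true_error = 0.05 * raw_depth[:, 0] - 0.02

# Fit a Random Forest to regress the real-valued correction from the
# raw measurement, then subtract the predicted error.
model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(raw_depth, true_error)
corrected = raw_depth[:, 0] - model.predict(raw_depth)
```

In practice the regressor would be trained on the captured dataset with richer per-pixel features, but the fit-predict-subtract structure stays the same.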
no code implementations • ICCV 2015 • Rahul Nair, Andrew Fitzgibbon, Daniel Kondermann, Carsten Rother
Stereo reconstruction under real-world conditions faces many challenges that still need to be addressed.
no code implementations • CVPR 2013 • Ralf Haeusler, Rahul Nair, Daniel Kondermann
With the aim to improve accuracy of stereo confidence measures, we apply the random decision forest framework to a large set of diverse stereo confidence measures.