Search Results for author: Shubham Sharma

Found 16 papers, 3 papers with code

Significance of the levels of spectral valleys with application to front/back distinction of vowel sounds

no code implementations 16 Jun 2015 T. V. Ananthapadmanabha, A. G. Ramakrishnan, Shubham Sharma

An objective critical distance (OCD) has been defined as the spacing between adjacent formants at which the level of the valley between them reaches the mean spectral level.
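The definition lends itself to a quick numerical illustration. The sketch below builds a hypothetical two-formant spectral envelope (Gaussian peaks with assumed frequencies and bandwidths, not the authors' data or code) and compares the level of the valley between the formants with the mean spectral level:

```python
import numpy as np

# Hypothetical smooth spectral envelope made of two formant peaks (Gaussian
# bumps on a dB scale) -- illustrative only, not the authors' analysis code.
freqs = np.linspace(0, 4000, 2048)          # Hz
f1, f2 = 700.0, 1400.0                      # assumed adjacent formant frequencies
bw = 150.0                                  # assumed bandwidth (Hz)
envelope_db = (30 * np.exp(-0.5 * ((freqs - f1) / bw) ** 2)
               + 25 * np.exp(-0.5 * ((freqs - f2) / bw) ** 2))

# Mean spectral level over the band.
mean_level = envelope_db.mean()

# Level of the valley between the two adjacent formants.
between = (freqs > f1) & (freqs < f2)
valley_level = envelope_db[between].min()

# Per the definition above, the OCD is the spacing (f2 - f1) at which the
# valley level just reaches the mean spectral level.
print(f"spacing = {f2 - f1:.0f} Hz, valley = {valley_level:.1f} dB, "
      f"mean = {mean_level:.1f} dB, valley <= mean: {valley_level <= mean_level}")
```

Sweeping the formant spacing until the valley level just reaches the mean level would yield the OCD in this toy setting.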

Compound Type Identification in Sanskrit: What Roles do the Corpus and Grammar Play?

no code implementations WS 2016 Amrith Krishna, Pavankumar Satuluri, Shubham Sharma, Apurv Kumar, Pawan Goyal

We construct an elaborate feature space for our system by combining conditional rules from the grammar Aṣṭādhyāyī, semantic relations between the compound components from the lexical database Amarakoṣa, and linguistic structures from the data using Adaptor Grammars.

Classification, General Classification +2

CERTIFAI: Counterfactual Explanations for Robustness, Transparency, Interpretability, and Fairness of Artificial Intelligence models

no code implementations 20 May 2019 Shubham Sharma, Jette Henderson, Joydeep Ghosh

Given a model and an input instance, CERTIFAI uses a custom genetic algorithm to generate counterfactuals: instances close to the input that change the prediction of the model.

counterfactual, Fairness
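As an illustration of the kind of genetic search described above, the following sketch evolves perturbations of an input toward a nearby instance that flips the prediction of a stand-in linear classifier; the model, fitness function, and hyperparameters are assumptions for this toy example and not the CERTIFAI implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box binary classifier (stand-in for any model).
def predict(X):
    return (X @ np.array([1.5, -2.0, 0.5]) > 0.0).astype(int)

def counterfactual_ga(x, n_pop=200, n_gen=50, sigma=0.5):
    """Toy genetic search for an instance close to x with a flipped prediction."""
    target = 1 - predict(x[None])[0]
    pop = x + rng.normal(scale=sigma, size=(n_pop, x.size))   # initialise near x
    for _ in range(n_gen):
        # Fitness: negative distance to x, heavily penalised if the class
        # has not flipped (we want close counterfactuals of the target class).
        dist = np.linalg.norm(pop - x, axis=1)
        fitness = -dist - 1e3 * (predict(pop) != target)
        # Selection: keep the better half of the population.
        parents = pop[np.argsort(fitness)[-n_pop // 2:]]
        # Crossover: average random pairs of parents.
        pairs = rng.integers(0, len(parents), size=(n_pop // 2, 2))
        children = parents[pairs].mean(axis=1)
        # Mutation: small Gaussian noise.
        children += rng.normal(scale=0.1 * sigma, size=children.shape)
        pop = np.vstack([parents, children])
    final_fitness = (-np.linalg.norm(pop - x, axis=1)
                     - 1e3 * (predict(pop) != target))
    return pop[np.argmax(final_fitness)]

x = np.array([0.2, 0.4, -0.1])
cf = counterfactual_ga(x)
print("original class:", predict(x[None])[0],
      "counterfactual class:", predict(cf[None])[0])
print("counterfactual:", np.round(cf, 3))
```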

FaiR-N: Fair and Robust Neural Networks for Structured Data

no code implementations 13 Oct 2020 Shubham Sharma, Alan H. Gee, David Paydarfar, Joydeep Ghosh

Fairness in machine learning is crucial when individuals are subject to automated decisions made by models in high-stakes domains.

Adversarial Robustness, Attribute +1

When to Trust Your Simulator: Dynamics-Aware Hybrid Offline-and-Online Reinforcement Learning

1 code implementation 27 Jun 2022 Haoyi Niu, Shubham Sharma, Yiwen Qiu, Ming Li, Guyue Zhou, Jianming Hu, Xianyuan Zhan

This brings up a new question: is it possible to combine learning from limited real data in offline RL and unrestricted exploration through imperfect simulators in online RL to address the drawbacks of both approaches?

Offline RL, reinforcement-learning +1

FEAMOE: Fair, Explainable and Adaptive Mixture of Experts

no code implementations 10 Oct 2022 Shubham Sharma, Jette Henderson, Joydeep Ghosh

In this paper, we propose FEAMOE, a novel "mixture-of-experts" inspired framework aimed at learning fairer, more explainable/interpretable models that can also rapidly adjust to drifts in both the accuracy and the fairness of a classifier.

Fairness

FASTER-CE: Fast, Sparse, Transparent, and Robust Counterfactual Explanations

no code implementations 12 Oct 2022 Shubham Sharma, Alan H. Gee, Jette Henderson, Joydeep Ghosh

The ability to quickly examine combinations of the most promising gradient directions as well as to incorporate additional user-defined constraints allows us to generate multiple counterfactual explanations that are sparse, realistic, and robust to input manipulations.

counterfactual, Explanation Generation
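A minimal sketch of the gradient-direction idea, under assumptions: a hypothetical logistic scorer with an analytic gradient, steps restricted to the k largest-gradient coordinates (which also encourages sparsity), and a user-supplied set of immutable features. It illustrates the flavour of the approach, not the FASTER-CE algorithm itself.

```python
import numpy as np

# Hypothetical differentiable scorer: logistic model with known weights.
w, b = np.array([1.5, -2.0, 0.5, 0.8]), -0.2
score = lambda x: 1.0 / (1.0 + np.exp(-(x @ w + b)))   # P(class 1)
grad = lambda x: score(x) * (1 - score(x)) * w          # d score / d x

def sparse_gradient_cf(x, frozen=(), k=2, step=0.05, max_iter=500):
    """Move x along its k largest-gradient coordinates until the class flips.
    `frozen` marks user-constrained (immutable) features."""
    target = 1 - int(score(x) > 0.5)
    cf = x.copy()
    for _ in range(max_iter):
        g = grad(cf)
        g[list(frozen)] = 0.0                            # respect user constraints
        idx = np.argsort(np.abs(g))[-k:]                 # most promising directions
        direction = np.zeros_like(g)
        direction[idx] = np.sign(g[idx])
        cf += step * direction if target == 1 else -step * direction
        if int(score(cf) > 0.5) == target:
            return cf
    return None

x = np.array([0.2, 0.4, -0.1, 0.3])
cf = sparse_gradient_cf(x, frozen=[3])                   # e.g., feature 3 is immutable
print("original class:", int(score(x) > 0.5), "counterfactual:", np.round(cf, 3))
```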

Hope Speech Detection on Social Media Platforms

1 code implementation 14 Nov 2022 Pranjal Aggarwal, Pasupuleti Chandana, Jagrut Nemade, Shubham Sharma, Sunil Saumya, Shankar Biradar

Since personal computers became widely available in the consumer market, the amount of harmful content on the internet has significantly expanded.

Hope Speech Detection, Sentence

On the Connection between Game-Theoretic Feature Attributions and Counterfactual Explanations

no code implementations 13 Jul 2023 Emanuele Albini, Shubham Sharma, Saumitra Mishra, Danial Dervovic, Daniele Magazzeni

Explainable Artificial Intelligence (XAI) has received widespread interest in recent years, and two of the most popular types of explanations are feature attributions and counterfactual explanations.

counterfactual, Counterfactual Explanation +3

SafeAR: Safe Algorithmic Recourse by Risk-Aware Policies

no code implementations 23 Aug 2023 Haochen Wu, Shubham Sharma, Sunandita Patra, Sriram Gopalakrishnan

However, the uncertainties of feature changes and the risk of higher-than-average costs in recourse have not been considered.

Fair Coresets via Optimal Transport

no code implementations 9 Nov 2023 Zikai Xiong, Niccolò Dalmasso, Shubham Sharma, Freddy Lecue, Daniele Magazzeni, Vamsi K. Potluru, Tucker Balch, Manuela Veloso

In this work, we present fair Wasserstein coresets (FWC), a novel coreset approach which generates fair synthetic representative samples along with sample-level weights to be used in downstream learning tasks.

Clustering, Decision Making +1

The Effect of Data Poisoning on Counterfactual Explanations

1 code implementation 13 Feb 2024 André Artelt, Shubham Sharma, Freddy Lecué, Barbara Hammer

Counterfactual explanations provide a popular method for analyzing the predictions of black-box systems, and they can offer the opportunity for computational recourse by suggesting actionable changes to the input that yield a different (i.e., more favorable) system output.

counterfactual, Data Poisoning

REFRESH: Responsible and Efficient Feature Reselection Guided by SHAP Values

no code implementations 13 Mar 2024 Shubham Sharma, Sanghamitra Dutta, Emanuele Albini, Freddy Lecue, Daniele Magazzeni, Manuela Veloso

In this paper, we introduce the problem of feature reselection, so that features can be selected with respect to secondary model performance characteristics efficiently even after a feature selection process has been done with respect to a primary objective.

Fairness, feature selection
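A toy sketch of the reselection idea: starting from hypothetical per-feature attribution scores (e.g., aggregated SHAP magnitudes) toward a primary objective and a secondary characteristic, a subset is re-chosen to trade the two off. The feature names, numbers, and greedy rule below are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Hypothetical per-feature attribution scores, e.g., derived from SHAP values:
# contribution to predictive accuracy (primary) and to a fairness-related
# disparity measure (secondary, lower is better). Purely illustrative numbers.
features = ["age", "income", "zip_code", "tenure", "balance"]
accuracy_contrib = np.array([0.30, 0.25, 0.20, 0.15, 0.10])
disparity_contrib = np.array([0.05, 0.10, 0.40, 0.02, 0.03])

def reselect(k, lam=1.0):
    """Greedily pick k features scoring high on the primary objective while
    penalising contribution to the secondary (disparity) characteristic."""
    combined = accuracy_contrib - lam * disparity_contrib
    chosen = np.argsort(combined)[::-1][:k]
    return [features[i] for i in sorted(chosen)]

# Primary-only selection vs. a reselection that also weighs disparity.
print("primary-only:", reselect(3, lam=0.0))   # keeps zip_code
print("reselected: ", reselect(3, lam=1.0))    # drops zip_code for tenure
```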
