Search Results for author: Sahil Verma

Found 13 papers, 5 papers with code

Effective Backdoor Mitigation Depends on the Pre-training Objective

no code implementations25 Nov 2023 Sahil Verma, Gantavya Bhatt, Avi Schwarzschild, Soumye Singhal, Arnav Mohanty Das, Chirag Shah, John P Dickerson, Jeff Bilmes

In this work, we demonstrate that the efficacy of CleanCLIP in mitigating backdoors is highly dependent on the particular objective used during model pre-training.

Unveiling the Power of Self-Attention for Shipping Cost Prediction: The Rate Card Transformer

1 code implementation20 Nov 2023 P Aditya Sreekar, Sahil Verma, Varun Madhavan, Abhishek Persad

The shipping costs of these packages are used on the day of shipping (day 0) to estimate the profitability of sales.

RecRec: Algorithmic Recourse for Recommender Systems

no code implementations28 Aug 2023 Sahil Verma, Ashudeep Singh, Varich Boonsanong, John P. Dickerson, Chirag Shah

To the best of our knowledge, this work is the first to conceptualize and empirically test a generalized framework for generating recourses for recommender systems.

Recommendation Systems

RecXplainer: Amortized Attribute-based Personalized Explanations for Recommender Systems

no code implementations27 Nov 2022 Sahil Verma, Chirag Shah, John P. Dickerson, Anurag Beniwal, Narayanan Sadagopan, Arjun Seshadri

We evaluate RecXplainer on five real-world and large-scale recommendation datasets using five different kinds of recommender systems to demonstrate the efficacy of RecXplainer in capturing users' preferences over item attributes and using them to explain recommendations.

Attribute Recommendation Systems

Pitfalls of Explainable ML: An Industry Perspective

no code implementations14 Jun 2021 Sahil Verma, Aditya Lahiri, John P. Dickerson, Su-In Lee

The goal of explainable ML is to intuitively explain the predictions of an ML system while adhering to the needs of various stakeholders.

Explainable Artificial Intelligence (XAI)

Counterfactual Explanations for Machine Learning: Challenges Revisited

no code implementations14 Jun 2021 Sahil Verma, John Dickerson, Keegan Hines

Counterfactual explanations (CFEs) are an emerging technique under the umbrella of interpretability of machine learning (ML) models.

BIG-bench Machine Learning counterfactual
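As a generic illustration of the technique (not the method of any specific paper listed here), a counterfactual explanation can be searched for by nudging an instance until a toy differentiable classifier's prediction flips, while penalizing distance from the original instance; the logistic weights, penalty, and step size below are assumptions made only for this sketch.

```python
# Minimal counterfactual-explanation sketch (illustrative, not a paper's method):
# gradient-descend on a perturbed copy of x until the toy model predicts class 1,
# while a quadratic penalty keeps the counterfactual close to the original.
import numpy as np

w, b = np.array([1.5, -2.0, 0.5]), -0.25          # assumed toy logistic model
f = lambda x: 1.0 / (1.0 + np.exp(-(x @ w + b)))  # P(y = 1 | x)

x = np.array([-1.0, 1.0, 0.0])                    # factual instance, f(x) < 0.5
x_cf, lam, lr = x.copy(), 0.1, 0.5
for _ in range(500):
    p = f(x_cf)
    if p > 0.5:                                   # prediction has flipped
        break
    # gradient of (p - 1)^2 + lam * ||x_cf - x||^2 with respect to x_cf
    grad = 2 * (p - 1) * p * (1 - p) * w + 2 * lam * (x_cf - x)
    x_cf -= lr * grad

print("factual:", x, "counterfactual:", np.round(x_cf, 3), "P(y=1):", round(f(x_cf), 3))
```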

Amortized Generation of Sequential Algorithmic Recourses for Black-box Models

1 code implementation7 Jun 2021 Sahil Verma, Keegan Hines, John P. Dickerson

We propose a novel stochastic-control-based approach that generates sequential ARs, i.e., recourses that allow the input x to move stochastically and sequentially across intermediate states to a final state x'.
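Illustratively, a sequential recourse is a trajectory of small steps rather than a single jump; the paper casts this as a stochastic-control problem, whereas the toy scorer, step size, and acceptance rule below are assumptions for a naive sketch of the trajectory idea only.

```python
# Illustrative only: a sequential recourse as a trajectory of small stochastic
# steps from a rejected state x toward an accepted final state x'.
import numpy as np

rng = np.random.default_rng(1)
w, b = np.array([2.0, 1.0]), -3.0
score = lambda x: 1.0 / (1.0 + np.exp(-(x @ w + b)))   # toy acceptance probability

x = np.array([0.2, 0.3])                # initially rejected instance
trajectory = [x.copy()]
for _ in range(2000):
    if score(x) >= 0.5:                 # reached an accepted final state x'
        break
    step = rng.normal(0.0, 0.1, size=x.shape)          # small stochastic move
    if score(x + step) > score(x):                     # keep moves that help
        x = x + step
        trajectory.append(x.copy())

print(f"{len(trajectory) - 1} intermediate steps, final score {score(x):.2f}")
```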

Removing biased data to improve fairness and accuracy

1 code implementation5 Feb 2021 Sahil Verma, Michael Ernst, Rene Just

Machine learning models trained on such debiased data (a subset of the original training data) have low individual discrimination, often 0%.

BIG-bench Machine Learning Fairness
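One common way to operationalize individual discrimination (an assumption here, not necessarily the paper's exact metric) is the fraction of individuals whose prediction changes when only the sensitive attribute is toggled; the synthetic data and model below are made up for a minimal sketch.

```python
# Hedged sketch: estimate individual discrimination as the rate at which
# predictions flip when only the sensitive attribute is toggled.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
sensitive = rng.integers(0, 2, n)                  # synthetic protected-group flag
other = rng.normal(size=(n, 2))
y = (other[:, 0] + 0.8 * sensitive + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

X = np.column_stack([sensitive, other])
clf = LogisticRegression().fit(X, y)

X_flipped = X.copy()
X_flipped[:, 0] = 1 - X_flipped[:, 0]              # toggle only the sensitive column
disc_rate = np.mean(clf.predict(X) != clf.predict(X_flipped))
print(f"individual discrimination: {disc_rate:.1%}")
```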

ShapeFlow: Dynamic Shape Interpreter for TensorFlow

1 code implementation26 Nov 2020 Sahil Verma, Zhendong Su

We present ShapeFlow, a dynamic abstract interpreter for TensorFlow which quickly catches tensor shape incompatibility errors, one of the most common bugs in deep learning code.
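For context, the class of bug in question looks like the snippet below: the mismatched weight shape only surfaces as a runtime error once the offending op executes, whereas a shape-level interpreter can flag it from the shapes alone. The layer sizes are arbitrary, and this illustrates the bug class, not ShapeFlow's API.

```python
# A typical tensor-shape incompatibility bug in TensorFlow.
import tensorflow as tf

x = tf.random.normal([32, 784])        # batch of flattened 28x28 inputs
w1 = tf.random.normal([784, 128])
w2 = tf.random.normal([100, 10])       # bug: should be [128, 10]

h = tf.matmul(x, w1)                   # OK: (32, 784) x (784, 128) -> (32, 128)
logits = tf.matmul(h, w2)              # fails at runtime: inner dims 128 vs. 100
```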

Counterfactual Explanations and Algorithmic Recourses for Machine Learning: A Review

no code implementations20 Oct 2020 Sahil Verma, Varich Boonsanong, Minh Hoang, Keegan E. Hines, John P. Dickerson, Chirag Shah

Machine learning plays a role in many deployed decision systems, often in ways that are difficult or impossible for human stakeholders to understand.

BIG-bench Machine Learning counterfactual +1

Facets of Fairness in Search and Recommendation

no code implementations16 Jul 2020 Sahil Verma, Ruoyuan Gao, Chirag Shah

Several recent works have highlighted how search and recommender systems exhibit bias along different dimensions.

Fairness Recommendation Systems

Benchmarking Symbolic Execution Using Constraint Problems -- Initial Results

no code implementations22 Jan 2020 Sahil Verma, Roland H. C. Yap

We transform CSP benchmarks into C programs suitable for testing the reasoning capabilities of symbolic execution tools.

Software Engineering Logic in Computer Science I.2.0; I.2.1; I.2.3; I.2.4; I.2.8; I.2.11; D.2
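The transformation can be pictured with a toy generator (not the authors' tool): a small constraint set becomes a C program whose target branch is reachable only on satisfying inputs, so covering that branch forces a symbolic-execution engine to solve the constraints. The constraint set and program skeleton below are invented for illustration.

```python
# Toy sketch: turn a tiny integer constraint problem into a C test case whose
# target() call is reachable only when the constraints are satisfied.
constraints = ["x + y == 10", "x * y == 24"]       # hypothetical CSP over ints

guards = " && ".join(f"({c})" for c in constraints)
c_program = f"""#include <stdio.h>

void target(void) {{ printf("constraints solved\\n"); }}

int main(void) {{
    int x, y;
    if (scanf("%d %d", &x, &y) != 2) return 1;
    if ({guards})
        target();               /* reachable only if the CSP is satisfied */
    return 0;
}}
"""
print(c_program)
```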
