no code implementations • 25 Sep 2024 • P Aditya Sreekar, Sahil Verma, Suransh Chopra, Sarik Ghazarian, Abhishek Persad, Narayanan Sadagopan
We also evaluate the influence of underlying LLMs on prompt-based metric performance and recalibrate the SOTA prompt-based metrics with the latest LLMs for a fair comparison.
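For context, a prompt-based metric scores an output by asking an LLM directly, so recalibrating it amounts to re-running the same prompt under a newer model. A minimal sketch, where the prompt wording, the 1-5 scale, and the `llm_complete` stub are illustrative assumptions rather than the paper's setup:

```python
import re

def llm_complete(prompt: str) -> str:
    """Placeholder for any LLM completion client (an assumption,
    not an API from the paper)."""
    raise NotImplementedError

PROMPT = (
    "On a scale of 1 to 5, rate the quality of the following text.\n"
    "Respond with a single integer.\n\nText:\n{text}\n\nScore:"
)

def prompt_based_score(text: str) -> int:
    """Ask the LLM for a rating and parse the first digit 1-5."""
    reply = llm_complete(PROMPT.format(text=text))
    match = re.search(r"[1-5]", reply)
    if not match:
        raise ValueError(f"unparseable reply: {reply!r}")
    return int(match.group())
```

Swapping the model behind `llm_complete` and re-scoring a fixed benchmark is then the recalibration step.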
no code implementations • 25 Nov 2023 • Sahil Verma, Gantavya Bhatt, Avi Schwarzschild, Soumye Singhal, Arnav Mohanty Das, Chirag Shah, John P Dickerson, Jeff Bilmes
In this work, we demonstrate that the efficacy of CleanCLIP in mitigating backdoors is highly dependent on the particular objective used during model pre-training.
1 code implementation • 20 Nov 2023 • P Aditya Sreekar, Sahil Verma, Varun Madhavan, Abhishek Persad
The shipping costs of these packages are used on the day of shipping (day 0) to estimate the profitability of sales.
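A minimal sketch of that day-0 estimation framed as supervised regression; the three package features and the synthetic cost target below are hypothetical stand-ins for the paper's data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
# Hypothetical day-0 features: weight, volume, shipping-zone index.
X = rng.random((1000, 3))
# Synthetic stand-in for the shipping cost invoiced later.
y = 5.0 + 4.0 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(0.0, 0.2, 1000)

model = GradientBoostingRegressor().fit(X, y)
day0_cost_estimate = model.predict(X[:1])  # available at shipping time
```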
no code implementations • 28 Aug 2023 • Sahil Verma, Ashudeep Singh, Varich Boonsanong, John P. Dickerson, Chirag Shah
To the best of our knowledge, this work is the first to conceptualize and empirically test a generalized framework for generating recourses for recommender systems.
no code implementations • 27 Nov 2022 • Sahil Verma, Chirag Shah, John P. Dickerson, Anurag Beniwal, Narayanan Sadagopan, Arjun Seshadri
We evaluate RecXplainer on five large-scale, real-world recommendation datasets and five different kinds of recommender systems, demonstrating its efficacy in capturing users' preferences over item attributes and using them to explain recommendations.
1 code implementation • 12 Jul 2022 • Isha Hameed, Samuel Sharpe, Daniel Barcklow, Justin Au-Yeung, Sahil Verma, Jocelyn Huang, Brian Barr, C. Bayan Bruss
The goal is to assess the sensitivity of the model's performance by perturbing the input variables in rank order of importance.
Explainable Artificial Intelligence (XAI)
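As a rough illustration of the procedure described in this entry, the sketch below perturbs features from most to least important and records the metric after each step; using column permutation as the perturbation, and the shape of `ranking`, are assumptions:

```python
import numpy as np

def perturbation_curve(model, X, y, ranking, metric):
    """Perturb features in rank order of importance, tracking the metric.

    ranking: feature indices, most to least important.
    Perturbation here is column-wise permutation (one common choice).
    A faithful ranking should produce a steep early drop in the metric.
    """
    rng = np.random.default_rng(0)
    Xp = X.copy()
    curve = [metric(y, model.predict(Xp))]
    for j in ranking:
        Xp[:, j] = rng.permutation(Xp[:, j])
        curve.append(metric(y, model.predict(Xp)))
    return curve
```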
no code implementations • 14 Jun 2021 • Sahil Verma, John Dickerson, Keegan Hines
Counterfactual explanations (CFEs) are an emerging technique under the umbrella of interpretability of machine learning (ML) models.
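One classic instance from this literature is the gradient-based search of Wachter et al.; a minimal sketch for a hand-coded logistic model, where the penalty weight and step size are arbitrary choices:

```python
import numpy as np

def counterfactual(x, w, b, target=1.0, lam=0.1, lr=0.5, steps=500):
    """Wachter-style CFE: minimize (f(x') - target)^2 + lam*||x' - x||^2
    for f(x') = sigmoid(w.x' + b), by plain gradient descent."""
    x_cf = np.asarray(x, dtype=float).copy()
    for _ in range(steps):
        f = 1.0 / (1.0 + np.exp(-(w @ x_cf + b)))
        grad = 2 * (f - target) * f * (1 - f) * w + 2 * lam * (x_cf - x)
        x_cf -= lr * grad
    return x_cf  # a nearby input the model labels as `target`

# x_cf - x is then the suggested change to the original input.
x_cf = counterfactual(np.array([0.2, -1.0]), w=np.array([1.0, 2.0]), b=0.0)
```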
no code implementations • 14 Jun 2021 • Sahil Verma, Aditya Lahiri, John P. Dickerson, Su-In Lee
The goal of explainable ML is to intuitively explain the predictions of an ML system while adhering to the needs of various stakeholders.
1 code implementation • 7 Jun 2021 • Sahil Verma, Keegan Hines, John P. Dickerson
We propose a novel stochastic-control-based approach that generates sequential ARs, that is, ARs that allow x to move stochastically and sequentially across intermediate states to a final state x'.
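A minimal greedy sketch of the sequential idea (not the paper's stochastic-control formulation): take small per-feature steps, keep whichever most raises the positive-class probability, and record every intermediate state. `predict_proba` is a hypothetical callable mapping a single input to that probability:

```python
import numpy as np

def sequential_recourse(x, predict_proba, step=0.1, max_steps=50):
    """Greedy stand-in for a sequential AR: move x through intermediate
    states one small feature change at a time until the model flips."""
    path = [np.asarray(x, dtype=float)]
    for _ in range(max_steps):
        cur = path[-1]
        candidates = [cur.copy() for _ in range(2 * len(cur))]
        for j in range(len(cur)):
            candidates[2 * j][j] += step
            candidates[2 * j + 1][j] -= step
        best = max(candidates, key=predict_proba)
        path.append(best)
        if predict_proba(best) > 0.5:
            break
    return path  # x, intermediate states, ..., final state x'
```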
1 code implementation • 5 Feb 2021 • Sahil Verma, Michael Ernst, René Just
Machine learning models trained on such debiased data (a subset of the original training data) have low individual discrimination, often 0%.
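A minimal sketch of how such an individual-discrimination rate can be measured: toggle only the (assumed binary) sensitive attribute and count prediction flips. The column index is a placeholder:

```python
import numpy as np

def individual_discrimination(model, X, sensitive_col=0):
    """Fraction of rows whose prediction changes when only the binary
    sensitive attribute is flipped; 0.0 means no individual discrimination."""
    X_flip = X.copy()
    X_flip[:, sensitive_col] = 1 - X_flip[:, sensitive_col]
    return float(np.mean(model.predict(X) != model.predict(X_flip)))
```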
1 code implementation • 26 Nov 2020 • Sahil Verma, Zhendong Su
We present ShapeFlow, a dynamic abstract interpreter for TensorFlow which quickly catches tensor shape incompatibility errors, one of the most common bugs in deep learning code.
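For illustration, the class of bug in question, as a minimal TF2-style sketch; in ShapeFlow's graph-mode setting such an error would only surface once the graph runs on real data:

```python
import tensorflow as tf

x = tf.zeros([32, 10])  # batch of 32 examples, 10 features each
w = tf.zeros([20, 5])   # BUG: first dimension should be 10, not 20

# Inner dimensions disagree (10 vs. 20): tf.matmul raises a shape error.
y = tf.matmul(x, w)
```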
no code implementations • 20 Oct 2020 • Sahil Verma, Varich Boonsanong, Minh Hoang, Keegan E. Hines, John P. Dickerson, Chirag Shah
Machine learning plays a role in many deployed decision systems, often in ways that are difficult or impossible to understand by human stakeholders.
no code implementations • 16 Jul 2020 • Sahil Verma, Ruoyuan Gao, Chirag Shah
Several recent works have highlighted how search and recommender systems exhibit bias along different dimensions.
no code implementations • 22 Jan 2020 • Sahil Verma, Roland H. C. Yap
We transform CSP benchmarks into C programs suitable for testing the reasoning capabilities of symbolic execution tools.
Software Engineering • Logic in Computer Science • I.2.0; I.2.1; I.2.3; I.2.4; I.2.8; I.2.11; D.2
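A minimal sketch of that transformation: emit a C harness whose branch condition encodes a toy CSP constraint and whose assert marks the satisfying region for a symbolic-execution tool to find. The constraint, the filename, and the comment about symbolic inputs are illustrative assumptions:

```python
# Toy stand-in for the CSP-to-C transformation: the branch condition
# encodes the constraint "x + y == 10 and x != y"; reaching the assert
# witnesses a satisfying assignment.
C_PROGRAM = """\
#include <assert.h>

int main(void) {
    int x, y;  /* marked symbolic via the tool's own API in practice */
    if (x + y == 10 && x != y) {
        assert(0);  /* reachable iff the constraint is satisfiable */
    }
    return 0;
}
"""

with open("csp_bench.c", "w") as fh:
    fh.write(C_PROGRAM)
```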