Search Results for author: Harsha Nori

Found 17 papers, 13 papers with code

Elephants Never Forget: Memorization and Learning of Tabular Data in Large Language Models

2 code implementations · 9 Apr 2024 · Sebastian Bordt, Harsha Nori, Vanessa Rodrigues, Besmira Nushi, Rich Caruana

We then compare the few-shot learning performance of LLMs on datasets seen during training with their performance on datasets released after training.

Tasks: Few-Shot Learning · Language Modelling · +2
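A minimal sketch of what such a comparison can look like; the file names and the `query_llm` helper below are hypothetical placeholders, not the authors' released code:

```python
# Illustrative sketch: estimate few-shot accuracy on a tabular dataset by
# prompting an LLM with labeled rows, then compare a dataset released before
# the model's training cutoff against one released after it.
import csv

def query_llm(prompt: str) -> str:
    raise NotImplementedError("hypothetical helper: replace with a real LLM API call")

def few_shot_accuracy(path, target, n_shots=20, n_eval=50):
    rows = list(csv.DictReader(open(path)))
    shots, tests = rows[:n_shots], rows[n_shots:n_shots + n_eval]
    fmt = lambda r: ", ".join(f"{k}={v}" for k, v in r.items() if k != target)
    context = "\n".join(f"{fmt(r)} -> {r[target]}" for r in shots)
    hits = sum(query_llm(f"{context}\n{fmt(r)} ->").strip() == r[target] for r in tests)
    return hits / len(tests)

# A large gap in favor of the pre-cutoff dataset is consistent with
# memorization rather than genuine few-shot learning.
gap = (few_shot_accuracy("seen_pre_cutoff.csv", "label")
       - few_shot_accuracy("unseen_post_cutoff.csv", "label"))
```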

Elephants Never Forget: Testing Language Models for Memorization of Tabular Data

1 code implementation · 11 Mar 2024 · Sebastian Bordt, Harsha Nori, Rich Caruana

While many have shown how Large Language Models (LLMs) can be applied to a diverse set of tasks, the critical issues of data contamination and memorization are often glossed over.

Tasks: Language Modelling · Memorization
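One way to test for such memorization is a row-completion check; here is a minimal illustrative sketch (the `query_llm` helper is a hypothetical stand-in, and this is not the paper's released code):

```python
# Illustrative sketch of a row-completion memorization test: show the model
# the first lines of a public CSV file verbatim and check whether it
# reproduces the held-out next row exactly, which would indicate the file
# appeared in the training data.
def query_llm(prompt: str) -> str:
    raise NotImplementedError("hypothetical helper: replace with a real LLM API call")

def row_completion_test(path: str, n_prefix: int = 15) -> bool:
    with open(path) as f:
        lines = [line.rstrip("\n") for line in f]
    prompt = "Continue this CSV file exactly:\n" + "\n".join(lines[:n_prefix])
    completion = query_llm(prompt).strip().splitlines()
    # Verbatim reproduction of the held-out row is strong evidence of memorization.
    return bool(completion) and completion[0] == lines[n_prefix]
```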

Differentially Private Synthetic Data via Foundation Model APIs 2: Text

1 code implementation · 4 Mar 2024 · Chulin Xie, Zinan Lin, Arturs Backurs, Sivakanth Gopi, Da Yu, Huseyin A. Inan, Harsha Nori, Haotian Jiang, Huishuai Zhang, Yin Tat Lee, Bo Li, Sergey Yekhanin

Lin et al. (2024) recently introduced the Private Evolution (PE) algorithm to generate DP synthetic images with only API access to diffusion models.

Tasks: Privacy Preserving
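A minimal sketch of the PE loop as described in that line of work; the `api_random`, `api_vary`, and `embed` helpers are hypothetical stand-ins for the foundation-model API and an embedding model, and the DP noise calibration is elided:

```python
# Sketch of Private Evolution (PE): evolve an API-generated synthetic set
# toward the private data using only a noised nearest-neighbor histogram.
import numpy as np

def api_random(n): ...      # hypothetical: n fresh samples from the model API
def api_vary(samples): ...  # hypothetical: API-generated variations of samples
def embed(x): ...           # hypothetical: embedding used for NN matching

def private_evolution(private_data, n_syn=100, n_iters=10, sigma=1.0):
    synthetic = api_random(n_syn)
    for _ in range(n_iters):
        emb = np.stack([embed(s) for s in synthetic])
        votes = np.zeros(n_syn)
        # Each private example votes for its nearest synthetic sample.
        for x in private_data:
            votes[np.argmin(np.linalg.norm(emb - embed(x), axis=1))] += 1
        votes += np.random.normal(0.0, sigma, n_syn)  # Gaussian noise for DP
        probs = np.clip(votes, 0.0, None)
        probs /= probs.sum()
        # Resample promising candidates and ask the API for variations.
        keep = np.random.choice(n_syn, size=n_syn, p=probs)
        synthetic = api_vary([synthetic[i] for i in keep])
    return synthetic
```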

Data Science with LLMs and Interpretable Models

1 code implementation · 22 Feb 2024 · Sebastian Bordt, Ben Lengerich, Harsha Nori, Rich Caruana

Recent years have seen important advances in building interpretable models: machine learning models that are designed to be easily understood by humans.

Tasks: Additive models · Question Answering

Interpretable Predictive Models to Understand Risk Factors for Maternal and Fetal Outcomes

no code implementations · 16 Oct 2023 · Tomas M. Bosschieter, Zifei Xu, Hui Lan, Benjamin J. Lengerich, Harsha Nori, Ian Painter, Vivienne Souter, Rich Caruana

The interpretability of the EBM models reveals surprising insights into the features contributing to risk (e.g., maternal height is the second most important feature for shoulder dystocia) and shows potential for clinical application in the prediction and prevention of serious complications in pregnancy.
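A sketch of how a fitted EBM surfaces such a ranking; this uses a public sklearn dataset in place of the obstetrics data, and assumes the `term_importances()` helper available in recent InterpretML releases:

```python
# Rank an EBM's terms by importance (mean absolute contribution), the kind
# of summary behind statements like "maternal height is the second most
# important feature". Public data is used here, not the clinical data.
import numpy as np
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
ebm = ExplainableBoostingClassifier().fit(X, y)

importances = ebm.term_importances()
for rank, i in enumerate(np.argsort(importances)[::-1][:5], start=1):
    print(f"{rank}. {ebm.term_names_[i]}: {importances[i]:.3f}")
```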

LLMs Understand Glass-Box Models, Discover Surprises, and Suggest Repairs

1 code implementation · 2 Aug 2023 · Benjamin J. Lengerich, Sebastian Bordt, Harsha Nori, Mark E. Nunnally, Yin Aphinyanaphongs, Manolis Kellis, Rich Caruana

We show that large language models (LLMs) are remarkably good at working with interpretable models that decompose complex outcomes into univariate graph-represented components.

Tasks: Additive models
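A minimal sketch of the core idea, assuming InterpretML's EBM: serialize one learned univariate graph into plain text that can be placed in an LLM prompt for critique (the prompt wording is illustrative, not the paper's):

```python
# Serialize one univariate component of a fitted EBM into text an LLM can
# read. exp.data(i) is InterpretML's per-term explanation payload with bin
# labels ("names") and additive contributions ("scores").
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
ebm = ExplainableBoostingClassifier().fit(X, y)

term = 0
data = ebm.explain_global().data(term)
graph_text = "\n".join(f"{n}: {s:+.3f}" for n, s in zip(data["names"], data["scores"]))
prompt = (f"Below is the learned effect of '{ebm.term_names_[term]}' on the outcome:\n"
          f"{graph_text}\nDescribe surprising patterns and suggest repairs.")
```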

Differentially Private Synthetic Data via Foundation Model APIs 1: Images

1 code implementation · 24 May 2023 · Zinan Lin, Sivakanth Gopi, Janardhan Kulkarni, Harsha Nori, Sergey Yekhanin

We further demonstrate the promise of applying PE on large foundation models such as Stable Diffusion to tackle challenging private datasets with a small number of high-resolution images.

Supporting Human-AI Collaboration in Auditing LLMs with LLMs

no code implementations · 19 Apr 2023 · Charvi Rastogi, Marco Tulio Ribeiro, Nicholas King, Harsha Nori, Saleema Amershi

Through the design process, we highlight the importance of sensemaking and human-AI communication in leveraging the complementary strengths of humans and generative models in collaborative auditing.

Tasks: Language Modelling · Large Language Model · +1

Sparks of Artificial General Intelligence: Early experiments with GPT-4

2 code implementations · 22 Mar 2023 · Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, Yi Zhang

We contend that (this early version of) GPT-4 is part of a new cohort of LLMs (along with ChatGPT and Google's PaLM for example) that exhibit more general intelligence than previous AI models.

Tasks: Arithmetic Reasoning · Math Word Problem Solving

Using Interpretable Machine Learning to Predict Maternal and Fetal Outcomes

no code implementations · 12 Jul 2022 · Tomas M. Bosschieter, Zifei Xu, Hui Lan, Benjamin J. Lengerich, Harsha Nori, Kristin Sitcov, Vivienne Souter, Rich Caruana

Most pregnancies and births result in a good outcome, but complications are not uncommon, and when they do occur, they can have serious implications for mothers and babies.

Tasks: BIG-bench Machine Learning · Interpretable Machine Learning

Interpretability, Then What? Editing Machine Learning Models to Reflect Human Knowledge and Values

2 code implementations · 30 Jun 2022 · Zijie J. Wang, Alex Kale, Harsha Nori, Peter Stella, Mark E. Nunnally, Duen Horng Chau, Mihaela Vorvoreanu, Jennifer Wortman Vaughan, Rich Caruana

Machine learning (ML) interpretability techniques can reveal undesirable patterns in data that models exploit to make predictions, potentially causing harms once deployed.

Tasks: Additive models · BIG-bench Machine Learning · +1
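A minimal sketch of the editing idea itself (not the interactive tool the paper presents), assuming recent InterpretML versions where the fitted `term_scores_` arrays can be edited in place:

```python
# Flatten a learned shape function that a domain expert judges undesirable;
# because an EBM's prediction is a sum of per-term lookups, zeroing the
# term's scores removes its contribution from every prediction.
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
ebm = ExplainableBoostingClassifier().fit(X, y)

i = ebm.term_names_.index("mean radius")  # stand-in for a problematic feature
ebm.term_scores_[i][:] = 0.0              # edit the model to ignore it
```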

Differentially Private Estimation of Heterogeneous Causal Effects

1 code implementation · 22 Feb 2022 · Fengshi Niu, Harsha Nori, Brian Quistorff, Rich Caruana, Donald Ngwe, Aadharsh Kannan

Our meta-algorithm can work with simple, single-stage CATE estimators such as the S-learner, as well as more complex multi-stage estimators such as the DR-learner and R-learner.
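For reference, a minimal S-learner sketch (without the paper's differential-privacy machinery): fit a single model on features plus treatment, then contrast predictions under treatment and control:

```python
# S-learner: the simplest single-stage CATE estimator. The DP mechanism
# from the paper is intentionally omitted; this only shows the kind of
# estimator such a meta-algorithm would wrap.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def s_learner_cate(X, t, y):
    # Treat the treatment indicator t as just another feature.
    model = GradientBoostingRegressor().fit(np.column_stack([X, t]), y)
    mu1 = model.predict(np.column_stack([X, np.ones(len(X))]))   # predicted outcome if treated
    mu0 = model.predict(np.column_stack([X, np.zeros(len(X))]))  # predicted outcome if untreated
    return mu1 - mu0  # estimated conditional average treatment effect per unit
```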

GAM Changer: Editing Generalized Additive Models with Interactive Visualization

1 code implementation · 6 Dec 2021 · Zijie J. Wang, Alex Kale, Harsha Nori, Peter Stella, Mark Nunnally, Duen Horng Chau, Mihaela Vorvoreanu, Jennifer Wortman Vaughan, Rich Caruana

Recent strides in interpretable machine learning (ML) research reveal that models exploit undesirable patterns in the data to make predictions, which can cause harm in deployment.

Tasks: Additive models · Interpretable Machine Learning

Accuracy, Interpretability, and Differential Privacy via Explainable Boosting

1 code implementation · 17 Jun 2021 · Harsha Nori, Rich Caruana, Zhiqi Bu, Judy Hanwen Shen, Janardhan Kulkarni

We show that adding differential privacy to Explainable Boosting Machines (EBMs), a recent method for training interpretable ML models, yields state-of-the-art accuracy while protecting privacy.

Tasks: regression
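A minimal usage sketch, assuming the `DPExplainableBoostingClassifier` shipped in InterpretML's privacy module implements this method (parameter names may vary by version):

```python
# Train a differentially private EBM; epsilon and delta set the privacy budget.
from interpret.privacy import DPExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
dp_ebm = DPExplainableBoostingClassifier(epsilon=1.0, delta=1e-6).fit(X, y)
```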

InterpretML: A Unified Framework for Machine Learning Interpretability

2 code implementations · 19 Sep 2019 · Harsha Nori, Samuel Jenkins, Paul Koch, Rich Caruana

InterpretML is an open-source Python package which exposes machine learning interpretability algorithms to practitioners and researchers.

Tasks: Additive models · BIG-bench Machine Learning
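A minimal usage sketch of the package's glass-box API:

```python
# Train an Explainable Boosting Machine and open InterpretML's dashboard
# to inspect per-feature shape functions and overall importances.
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
ebm = ExplainableBoostingClassifier().fit(X, y)
show(ebm.explain_global())
```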
