Search Results for author: Marco Tulio Ribeiro

Found 23 papers, 14 papers with code

Adaptive Testing and Debugging of NLP Models

no code implementations · ACL 2022 · Marco Tulio Ribeiro, Scott Lundberg

Current approaches to testing and debugging NLP models rely on highly variable human creativity and extensive labor, or only work for a very restrictive class of bugs.
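
The paper's remedy (the AdaTest loop) pairs a large language model that proposes candidate tests with a person who keeps only the valid failures. A heavily simplified sketch of that propose-and-filter loop follows; `llm`, `model_under_test`, and the seed test are hypothetical stand-ins, not the released tooling.

```python
# Simplified propose-and-filter testing loop; everything here is a toy
# stand-in for the LLM, the model under test, and the human filtering step.
def llm(tests):
    # Stand-in generator: propose variations of existing tests.
    return [t.replace("good", "not bad") for t, _ in tests]

def model_under_test(text):
    return "positive" if "good" in text else "negative"  # toy sentiment model

tests = [("The food was good.", "positive")]
for _ in range(2):
    proposals = llm(tests)
    # In AdaTest a person vets these; here we keep every new failing proposal.
    seen = {t for t, _ in tests}
    tests += [(t, "positive") for t in proposals
              if t not in seen and model_under_test(t) != "positive"]
print(tests)
```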

Targeted Data Generation: Finding and Fixing Model Weaknesses

no code implementations · 28 May 2023 · Zexue He, Marco Tulio Ribeiro, Fereshte Khani

Even when aggregate accuracy is high, state-of-the-art NLP models often fail systematically on specific subgroups of data, resulting in unfair outcomes and eroding user trust.

Data Augmentation · Natural Language Inference · +2

Collaborative Development of NLP models

no code implementations · 20 May 2023 · Fereshte Khani, Marco Tulio Ribeiro

Our main insight is learning a local model for each concept, and a global model to integrate the original data with all concepts.

Language Modelling · Large Language Model
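
A minimal sketch of just this stated insight: one local classifier per concept, plus a global classifier fit on the original data pooled with every concept's data. The datasets and features are toys, and the paper's actual integration step is surely more involved than plain pooling.

```python
# One "local" model per concept, one "global" model over original + concept
# data. Toy 1-D features; scikit-learn classifiers as generic stand-ins.
from sklearn.linear_model import LogisticRegression

original = ([[0.0], [1.0], [0.1], [0.9]], [0, 1, 0, 1])
concepts = {"negation": ([[0.2], [0.8]], [1, 0]),
            "sarcasm":  ([[0.3], [0.7]], [1, 0])}

local = {name: LogisticRegression().fit(X, y)
         for name, (X, y) in concepts.items()}

X_all = original[0] + [x for X, _ in concepts.values() for x in X]
y_all = original[1] + [y for _, Y in concepts.values() for y in Y]
global_model = LogisticRegression().fit(X_all, y_all)
print(global_model.predict([[0.25]]))
```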

Supporting Human-AI Collaboration in Auditing LLMs with LLMs

no code implementations · 19 Apr 2023 · Charvi Rastogi, Marco Tulio Ribeiro, Nicholas King, Harsha Nori, Saleema Amershi

Through the design process we highlight the importance of sensemaking and human-AI communication to leverage complementary strengths of humans and generative models in collaborative auditing.

Language Modelling · Large Language Model · +1

Sparks of Artificial General Intelligence: Early experiments with GPT-4

2 code implementations · 22 Mar 2023 · Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, Yi Zhang

We contend that (this early version of) GPT-4 is part of a new cohort of LLMs (along with ChatGPT and Google's PaLM for example) that exhibit more general intelligence than previous AI models.

Arithmetic Reasoning · Math Word Problem Solving

ART: Automatic multi-step reasoning and tool-use for large language models

2 code implementations · 16 Mar 2023 · Bhargavi Paranjape, Scott Lundberg, Sameer Singh, Hannaneh Hajishirzi, Luke Zettlemoyer, Marco Tulio Ribeiro

We introduce Automatic Reasoning and Tool-use (ART), a framework that uses frozen LLMs to automatically generate intermediate reasoning steps as a program.

MMLU
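
The decode-pause-resume loop this implies can be sketched in a few lines: the frozen LLM emits program steps, and when a step names a tool the runtime executes it and appends the output before decoding resumes. `llm` and the tool markup below are hypothetical stand-ins, not ART's actual task library or syntax.

```python
import math

# Single registered tool; eval is restricted to math's namespace.
TOOLS = {"calc": lambda expr: str(eval(expr, {"__builtins__": {}}, vars(math)))}

def llm(prompt: str) -> str:
    # Stand-in for a frozen LLM emitting the next program step.
    return '[calc("37*12")]' if "Output:" not in prompt else "Answer: 444"

def run(task: str, max_steps: int = 5) -> str:
    prompt = task
    for _ in range(max_steps):
        step = llm(prompt)
        if step.startswith('[calc("'):
            expr = step[len('[calc("'):-len('")]')]
            prompt += f'\n{step}\nOutput: {TOOLS["calc"](expr)}'  # pause, run tool, resume
        else:
            return step
    return "no answer"

print(run("Q: What is 37 times 12?"))  # -> Answer: 444
```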

ScatterShot: Interactive In-context Example Curation for Text Transformation

1 code implementation · 14 Feb 2023 · Tongshuang Wu, Hua Shen, Daniel S. Weld, Jeffrey Heer, Marco Tulio Ribeiro

ScatterShot iteratively slices unlabeled data into task-specific patterns, samples informative inputs from underexplored or not-yet-saturated slices in an active learning manner, and helps users label more efficiently with the help of an LLM and the current example set.

Active Learning · In-Context Learning
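
The slice-then-sample step is concrete enough to sketch: cluster the unlabeled pool, then draw the next item to label from the slice with the least coverage so far. k-means over TF-IDF is a generic stand-in for ScatterShot's task-specific slicing, and the data is a toy.

```python
# Slice unlabeled data, then sample from the least-covered slice.
from collections import Counter
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

unlabeled = ["rewrite date 1/2/20", "rewrite date 03-04-2021",
             "fix name j. smith", "fix name a. jones", "fix name b. lee"]
labeled_slices = Counter({0: 2})  # pretend slice 0 already has two examples

X = TfidfVectorizer().fit_transform(unlabeled)
slices = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Pick the next unlabeled item from the slice with the least coverage.
target = min(set(slices), key=lambda s: labeled_slices[s])
next_item = next(t for t, s in zip(unlabeled, slices) if s == target)
print("label next:", next_item)
```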

Editing Models with Task Arithmetic

6 code implementations · 8 Dec 2022 · Gabriel Ilharco, Marco Tulio Ribeiro, Mitchell Wortsman, Suchin Gururangan, Ludwig Schmidt, Hannaneh Hajishirzi, Ali Farhadi

Changing how pre-trained models behave -- e.g., improving their performance on a downstream task or mitigating biases learned during pre-training -- is a common practice when developing machine learning systems.

Negation · Task Arithmetic
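
The operation behind this entry is concrete: a task vector is the element-wise difference between fine-tuned and pre-trained weights, and editing means adding a scaled sum of such vectors (a negative coefficient steers away from a task). A minimal sketch over plain PyTorch state dicts; the function names are illustrative, not the paper's released API.

```python
import torch

def task_vector(pretrained, finetuned):
    # tau = theta_finetuned - theta_pretrained, per parameter tensor.
    return {k: finetuned[k] - pretrained[k] for k in pretrained}

def edit(pretrained, vectors, coef=1.0):
    # theta_new = theta_pre + coef * sum(vectors); coef < 0 negates tasks.
    return {k: pretrained[k] + coef * sum(v[k] for v in vectors)
            for k in pretrained}

# Toy demo with a single-tensor "model".
pre = {"w": torch.tensor([1.0, 2.0])}
ft = {"w": torch.tensor([1.5, 1.0])}
print(edit(pre, [task_vector(pre, ft)], coef=-1.0))  # moves away from ft
```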

Adaptive Testing of Computer Vision Models

1 code implementation · ICCV 2023 · Irena Gao, Gabriel Ilharco, Scott Lundberg, Marco Tulio Ribeiro

Vision models often fail systematically on groups of data that share common semantic characteristics (e.g., rare objects or unusual scenes), but identifying these failure modes is a challenge.

Image Captioning · object-detection · +2

Fixing Model Bugs with Natural Language Patches

1 code implementation · 7 Nov 2022 · Shikhar Murty, Christopher D. Manning, Scott Lundberg, Marco Tulio Ribeiro

Current approaches for fixing systematic problems in NLP models (e.g., regex patches, finetuning on more data) are either brittle, or labor-intensive and liable to shortcuts.

Relation Extraction · Sentiment Analysis

ExSum: From Local Explanations to Model Understanding

1 code implementation · NAACL 2022 · Yilun Zhou, Marco Tulio Ribeiro, Julie Shah

Interpretability methods are developed to understand the working mechanisms of black-box models, which is crucial to their responsible deployment.

counterfactual

Finding and Fixing Spurious Patterns with Explanations

no code implementations · 3 Jun 2021 · Gregory Plumb, Marco Tulio Ribeiro, Ameet Talwalkar

Image classifiers often use spurious patterns, such as "relying on the presence of a person to detect a tennis racket", which do not generalize.

Data Augmentation

Polyjuice: Generating Counterfactuals for Explaining, Evaluating, and Improving Models

1 code implementation · ACL 2021 · Tongshuang Wu, Marco Tulio Ribeiro, Jeffrey Heer, Daniel S. Weld

While counterfactual examples are useful for analysis and training of NLP models, current generation methods either rely on manual labor to create very few counterfactuals, or only instantiate limited types of perturbations such as paraphrases or word substitutions.

counterfactual · Text Generation
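
The control-code interface such a generator exposes is easy to sketch: the caller picks a perturbation type and the model rewrites the sentence accordingly. `generate` is a hypothetical stand-in for the fine-tuned generator; negation, lexical, and shuffle are among the perturbation types the paper names.

```python
# Control-code counterfactual generation sketch; `generate` is a
# hypothetical stand-in for the fine-tuned conditioned generator.
CONTROL_CODES = ["negation", "lexical", "shuffle"]

def generate(sentence: str, code: str) -> str:
    # Toy rule so the sketch runs; the real generator is a conditioned LM.
    return sentence.replace(" is ", " is not ") if code == "negation" else sentence

s = "The movie is great."
counterfactuals = {c: generate(s, c) for c in CONTROL_CODES}
print(counterfactuals["negation"])  # -> "The movie is not great."
```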

Beyond Accuracy: Behavioral Testing of NLP models with CheckList

4 code implementations · ACL 2020 · Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, Sameer Singh

Although measuring held-out accuracy has been the primary approach to evaluate generalization, it often overestimates the performance of NLP models, while alternative approaches for evaluating models either focus on individual tasks or on specific behaviors.

Question Answering · Sentiment Analysis
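
One of CheckList's test types can be sketched from scratch: an invariance (INV) test asserts that perturbing a neutral attribute, here a name filled into a template, must not flip the prediction. The toy `predict_sentiment` stands in for any model under test; this shows the idea, not the released checklist library's API.

```python
# Invariance (INV) test sketch: swapping a neutral attribute (a name)
# should not change the prediction. `predict_sentiment` is a toy stand-in.
def predict_sentiment(text: str) -> str:
    return "positive" if "great" in text else "negative"  # toy model

template = "{name} thought the movie was great."
names = ["Maria", "Ahmed", "Wei", "John"]

baseline = predict_sentiment(template.format(name=names[0]))
failures = [n for n in names[1:]
            if predict_sentiment(template.format(name=n)) != baseline]
print(f"INV failures: {len(failures)}/{len(names) - 1}")  # 0/3 here
```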

Are Red Roses Red? Evaluating Consistency of Question-Answering Models

1 code implementation · ACL 2019 · Marco Tulio Ribeiro, Carlos Guestrin, Sameer Singh

Although current evaluation of question-answering systems treats predictions in isolation, we need to consider the relationship between predictions to measure true understanding.

Question Answering · valid · +1
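
That relationship can be made mechanical: if the model answers a wh-question with X, the implied yes/no question about X should come back "yes". A toy sketch with a hypothetical `qa` interface, made deliberately inconsistent so the check fires.

```python
# Implication-style consistency check; `qa` is a hypothetical model
# interface, made deliberately inconsistent so the check fires.
def qa(question: str, context: str) -> str:
    return "red" if "color" in question else "no"  # inconsistent toy model

context = "Roses are red."
answer = qa("What color are roses?", context)   # -> "red"
implied = qa(f"Are roses {answer}?", context)   # should be "yes"
print("consistent" if implied == "yes" else "inconsistent")  # -> inconsistent
```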

Errudite: Scalable, Reproducible, and Testable Error Analysis

1 code implementation · ACL 2019 · Tongshuang Wu, Marco Tulio Ribeiro, Jeffrey Heer, Daniel Weld

Though error analysis is crucial to understanding and improving NLP models, the common practice of manual, subjective categorization of a small sample of errors can yield biased and incomplete conclusions.

counterfactual

Semantically Equivalent Adversarial Rules for Debugging NLP models

1 code implementation · ACL 2018 · Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin

Complex machine learning models for NLP are often brittle, making different predictions for input instances that are extremely similar semantically.

Data Augmentation · Question Answering · +3
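
A sketch of how such a rule is used: apply a meaning-preserving rewrite and count prediction flips, each flip being a bug. The rule below ("What is" to "What's") is one of the paper's examples; `predict` is a deliberately brittle toy model.

```python
import re

# One rule from the family the paper mines ("What is" -> "What's");
# `predict` is a deliberately brittle toy model.
RULE = (r"\bWhat is\b", "What's")

def predict(text: str) -> str:
    return "A" if "What is" in text else "B"

questions = ["What is the capital of France?", "Where is the Louvre?"]
flips = 0
for q in questions:
    q2 = re.sub(RULE[0], RULE[1], q)
    if q2 != q and predict(q2) != predict(q):
        flips += 1  # meaning preserved, prediction changed: a bug
print(f"rule flipped {flips} of {len(questions)} predictions")
```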

Programs as Black-Box Explanations

no code implementations · 22 Nov 2016 · Sameer Singh, Marco Tulio Ribeiro, Carlos Guestrin

Recent work in model-agnostic explanations of black-box machine learning has demonstrated that interpretability of complex models does not have to come at the cost of accuracy or model flexibility.

Program induction

Nothing Else Matters: Model-Agnostic Explanations By Identifying Prediction Invariance

no code implementations · 17 Nov 2016 · Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin

At the core of interpretable machine learning is the question of whether humans are able to make accurate predictions about a model's behavior.

Interpretable Machine Learning
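
A sketch of the invariance idea behind this question: probe which parts of an input can be held fixed so the prediction stays the same however the rest is perturbed; those parts are what a human needs to predict the model's behavior. The black box is a toy where only feature 0 matters.

```python
import numpy as np

def model(x):
    return int(x[0] > 0.5)  # toy black box: only feature 0 matters

rng = np.random.default_rng(1)
x0 = np.array([0.9, 0.1, 0.4])
for i in range(len(x0)):
    X = rng.uniform(size=(200, 3))
    X[:, i] = x0[i]  # hold feature i fixed, perturb everything else
    invariant = all(model(x) == model(x0) for x in X)
    print(f"feature {i} fixed -> prediction invariant: {invariant}")
```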

Model-Agnostic Interpretability of Machine Learning

no code implementations · 16 Jun 2016 · Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin

Understanding why machine learning models behave the way they do empowers both system designers and end-users in many ways: in model selection, in feature engineering, in trusting and acting upon predictions, and in designing more intuitive user interfaces.

BIG-bench Machine Learning · Feature Engineering · +1
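
The model-agnostic recipe this position paper argues for, and which the authors' LIME instantiates, fits in a few lines: sample perturbations around an input, query the black box, and fit a simple model weighted by proximity. The black box below is a toy stand-in.

```python
import numpy as np
from sklearn.linear_model import Ridge

def black_box(X):
    # Hypothetical model: a smooth score over two features.
    return 1 / (1 + np.exp(-(2 * X[:, 0] - X[:, 1])))

rng = np.random.default_rng(0)
x0 = np.array([1.0, 0.5])
X = x0 + rng.normal(scale=0.3, size=(500, 2))     # perturb around the input
w = np.exp(-np.sum((X - x0) ** 2, axis=1))        # proximity kernel
local = Ridge(alpha=1.0).fit(X, black_box(X), sample_weight=w)
print("local importances:", local.coef_)          # approx. [2, -1] direction
```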
