Search Results for author: Hanna Wallach

Found 40 papers, 17 papers with code

Datasheets for Datasets

21 code implementations 23 Mar 2018 Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, Kate Crawford

The machine learning community currently has no standardized process for documenting datasets, which can lead to severe consequences in high-stakes domains.

BIG-bench Machine Learning
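
For context, the datasheets this paper proposes are organized into a fixed set of sections, each containing questions for dataset creators to answer. Below is a minimal Python sketch of that structure; the single question shown per section is illustrative, as the paper specifies many more questions per section.

```python
# Minimal datasheet stub following the section headings proposed in the paper.
# The one example question per section is illustrative, not the full list.
datasheet = {
    "Motivation": "For what purpose was the dataset created?",
    "Composition": "What do the instances represent, and how many are there?",
    "Collection Process": "How was the data acquired, and over what timeframe?",
    "Preprocessing/Cleaning/Labeling": "Was any preprocessing or labeling done?",
    "Uses": "What (other) tasks could the dataset be used for?",
    "Distribution": "How will the dataset be distributed?",
    "Maintenance": "Who is supporting/maintaining the dataset?",
}

for section, question in datasheet.items():
    print(f"## {section}\n- {question}\n")
```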

Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting

4 code implementations 27 Jan 2019 Maria De-Arteaga, Alexey Romanov, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Adam Tauman Kalai

We present a large-scale study of gender bias in occupation classification, a task where the use of machine learning may lead to negative outcomes on people's lives.

Classification • General Classification
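
The study quantifies bias via gaps in classifier behavior across genders, such as differences in true positive rates per occupation. Below is a minimal sketch of that kind of measurement on synthetic data; the paper itself works with real online biographies and a specific set of occupations.

```python
# Sketch: measuring a per-gender true-positive-rate (TPR) gap for one
# occupation, in the spirit of the paper's analysis. Data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
gender = rng.choice(["F", "M"], size=n)
y_true = rng.integers(0, 2, size=n)            # 1 = holds the occupation
# A hypothetical classifier that misses positives more often for one group
p_correct = np.where(gender == "F", 0.75, 0.90)
y_pred = np.where(rng.random(n) < p_correct, y_true, 1 - y_true)

def tpr(group):
    mask = (gender == group) & (y_true == 1)
    return (y_pred[mask] == 1).mean()

print(f"TPR(M)={tpr('M'):.3f}  TPR(F)={tpr('F'):.3f}  gap={tpr('M') - tpr('F'):.3f}")
```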

Bayesian Poisson Tensor Factorization for Inferring Multilateral Relations from Sparse Dyadic Event Counts

1 code implementation 10 Jun 2015 Aaron Schein, John Paisley, David M. Blei, Hanna Wallach

We demonstrate that our model's predictive performance is better than that of standard non-negative tensor factorization methods.
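
The model underlying this paper is a Poisson tensor factorization of dyadic event counts (e.g., country-to-country actions over time). Below is a rough generative sketch assuming a four-mode CP decomposition with gamma-distributed factors; the dimensions and hyperparameters are placeholders, and the paper's Bayesian inference procedure is not shown.

```python
# Generative sketch of Poisson CP tensor factorization for dyadic event counts.
import numpy as np

rng = np.random.default_rng(0)
I, A, T, R = 20, 5, 12, 4        # actors, action types, time steps, components

theta_s = rng.gamma(0.1, 1.0, size=(I, R))   # sender factors
theta_r = rng.gamma(0.1, 1.0, size=(I, R))   # receiver factors
psi     = rng.gamma(0.1, 1.0, size=(A, R))   # action-type factors
delta   = rng.gamma(0.1, 1.0, size=(T, R))   # time-step factors

# The Poisson rate is a sum over R rank-1 components (CP decomposition)
rate = np.einsum("ir,jr,ar,tr->ijat", theta_s, theta_r, psi, delta)
counts = rng.poisson(rate)                    # sparse dyadic event count tensor
print(counts.shape, counts.sum())
```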

REAL ML: Recognizing, Exploring, and Articulating Limitations of Machine Learning Research

1 code implementation 5 May 2022 Jessie J. Smith, Saleema Amershi, Solon Barocas, Hanna Wallach, Jennifer Wortman Vaughan

Transparency around limitations can improve the scientific rigor of research, help ensure appropriate interpretation of research findings, and make research claims more credible.

BIG-bench Machine Learning

Language (Technology) is Power: A Critical Survey of "Bias" in NLP

1 code implementation 28 May 2020 Su Lin Blodgett, Solon Barocas, Hal Daumé III, Hanna Wallach

We survey 146 papers analyzing "bias" in NLP systems, finding that their motivations are often vague, inconsistent, and lacking in normative reasoning, despite the fact that analyzing "bias" is an inherently normative process.

Bayesian Poisson Tucker Decomposition for Learning the Structure of International Relations

1 code implementation 6 Jun 2016 Aaron Schein, Mingyuan Zhou, David M. Blei, Hanna Wallach

We introduce Bayesian Poisson Tucker decomposition (BPTD) for modeling country-country interaction event data.
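
A sketch of the rate structure implied by the title and abstract, with notation simplified from the paper: event counts are Poisson with a Tucker-structured rate in which countries load onto communities, action types onto topics, and time steps onto regimes.

```latex
% Sketch of the BPTD rate for sender i, receiver j, action type a, time t:
% \theta are country-community factors, \psi action-topic factors, \delta
% time-regime factors, and the core tensor \lambda holds community-to-
% community interaction rates per topic and regime.
y_{i \to j, a, t} \sim \operatorname{Poisson}\!\Big(
  \sum_{c} \sum_{d} \sum_{k} \sum_{r}
  \theta_{ic}\, \theta_{jd}\, \psi_{ak}\, \delta_{tr}\, \lambda_{c \to d, k, r}
\Big)
```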

Poisson-Gamma Dynamical Systems

1 code implementation 19 Jan 2017 Aaron Schein, Mingyuan Zhou, Hanna Wallach

We introduce a new dynamical system for sequentially observed multivariate count data.

Inductive Bias
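
Below is a rough simulation of the generative process suggested by the abstract: gamma-distributed latent states evolve through a transition matrix and emit Poisson counts. The hyperparameter values and specific priors are placeholders, not the paper's exact specification.

```python
# Generative sketch of a Poisson-Gamma dynamical system for count data.
import numpy as np

rng = np.random.default_rng(0)
K, V, T, tau = 4, 30, 50, 1.0    # latent dims, observed dims, time steps, concentration

Pi  = rng.dirichlet(np.ones(K), size=K).T    # column-stochastic transition matrix
Phi = rng.dirichlet(np.ones(V), size=K).T    # V x K emission factors

theta = rng.gamma(1.0, 1.0, size=K)
Y = np.empty((T, V), dtype=int)
for t in range(T):
    theta = rng.gamma(tau * (Pi @ theta), 1.0 / tau)   # gamma state transition
    Y[t] = rng.poisson(Phi @ theta)                     # Poisson emissions
print(Y.shape, Y.sum())
```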

Poisson-Gamma dynamical systems

1 code implementation NeurIPS 2016 Aaron Schein, Hanna Wallach, Mingyuan Zhou

This paper presents a dynamical system based on the Poisson-Gamma construction for sequentially observed multivariate count data.

Inductive Bias

Weight of Evidence as a Basis for Human-Oriented Explanations

1 code implementation 29 Oct 2019 David Alvarez-Melis, Hal Daumé III, Jennifer Wortman Vaughan, Hanna Wallach

Interpretability is an elusive but highly sought-after characteristic of modern machine learning methods.

Philosophy
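
The paper builds on I. J. Good's classical notion of weight of evidence, the log-likelihood ratio relating evidence to a hypothesis:

```latex
% Weight of evidence that evidence e provides in favor of hypothesis h over
% its negation; equivalently, the change from prior to posterior log-odds.
\operatorname{woe}(h : e) \;=\; \log \frac{P(e \mid h)}{P(e \mid \neg h)}
\;=\; \log \frac{P(h \mid e)}{P(\neg h \mid e)} \;-\; \log \frac{P(h)}{P(\neg h)}
```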

Poisson-Randomized Gamma Dynamical Systems

1 code implementation NeurIPS 2019 Aaron Schein, Scott W. Linderman, Mingyuan Zhou, David M. Blei, Hanna Wallach

This paper presents the Poisson-randomized gamma dynamical system (PRGDS), a model for sequentially observed count tensors that encodes a strong inductive bias toward sparsity and burstiness.

Inductive Bias
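
A heavily simplified sketch of the Poisson-randomized gamma idea named in the title: the shape of each gamma state is randomized by a Poisson count driven by the previous state, so a zero count can produce an exactly zero (sparse) state. The paper's full model adds further structure beyond this core construction.

```latex
% Poisson-randomized gamma state transition (simplified): when
% \varepsilon_0 = 0 and h_t = 0, the state \theta_t is exactly zero.
h_t \sim \operatorname{Poisson}\big(\tau\, \Pi\, \theta_{t-1}\big), \qquad
\theta_t \sim \operatorname{Gamma}\big(\varepsilon_0 + h_t,\; \tau\big)
```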

Locally Private Bayesian Inference for Count Models

1 code implementation 22 Mar 2018 Aaron Schein, Zhiwei Steven Wu, Alexandra Schofield, Mingyuan Zhou, Hanna Wallach

We present a general method for privacy-preserving Bayesian inference in Poisson factorization, a broad class of models that includes some of the most widely used models in the social sciences.

Bayesian Inference • Link Prediction +1
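
As an illustration of local privacy for count data: one standard mechanism adds two-sided geometric noise to each count before it leaves the user's device. The sketch below shows that mechanism; the paper's exact noise model, and its method for inferring the Poisson-factorization posterior from the noisy counts, are more involved.

```python
# Locally privatizing a count matrix with two-sided geometric noise, a
# standard epsilon-differentially private mechanism for integer counts.
import numpy as np

def two_sided_geometric(shape, epsilon, rng):
    """Noise with P(n = k) proportional to exp(-epsilon * |k|)."""
    alpha = np.exp(-epsilon)
    g1 = rng.geometric(1 - alpha, size=shape) - 1   # geometric on {0, 1, ...}
    g2 = rng.geometric(1 - alpha, size=shape) - 1
    return g1 - g2

rng = np.random.default_rng(0)
counts = rng.poisson(3.0, size=(100, 50))           # e.g., user-by-item counts
noisy = counts + two_sided_geometric(counts.shape, epsilon=1.0, rng=rng)
print(noisy.min(), noisy.max())                     # noisy counts may be negative
```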

Manipulating and Measuring Model Interpretability

1 code implementation 21 Feb 2018 Forough Poursabzi-Sangdeh, Daniel G. Goldstein, Jake M. Hofman, Jennifer Wortman Vaughan, Hanna Wallach

With machine learning models being increasingly used to aid decision making even in high-stakes domains, there has been a growing interest in developing interpretable models.

BIG-bench Machine Learning • Decision Making +1

Flexible Models for Microclustering with Application to Entity Resolution

no code implementations NeurIPS 2016 Giacomo Zanella, Brenda Betancourt, Hanna Wallach, Jeffrey Miller, Abbas Zaidi, Rebecca C. Steorts

Most generative models for clustering implicitly assume that the number of data points in each cluster grows linearly with the total number of data points.

Clustering • Entity Resolution
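
The linear-growth assumption can be seen in the Chinese restaurant process: the largest cluster's expected size grows roughly linearly in the number of data points. The sketch below illustrates this empirically; for entity resolution, where each latent entity should match only a handful of records, microclustering priors are designed to avoid this behavior.

```python
# Simulate a Chinese restaurant process and track the largest cluster size.
import numpy as np

def crp_cluster_sizes(n, alpha, rng):
    sizes = []
    for _ in range(n):
        probs = np.array(sizes + [alpha], dtype=float)
        k = rng.choice(len(probs), p=probs / probs.sum())
        if k == len(sizes):
            sizes.append(1)        # open a new cluster
        else:
            sizes[k] += 1          # join an existing cluster
    return sizes

rng = np.random.default_rng(0)
for n in (100, 1_000, 10_000):
    sizes = crp_cluster_sizes(n, alpha=1.0, rng=rng)
    print(f"n={n:>6}  largest cluster={max(sizes):>5}  fraction={max(sizes)/n:.2f}")
```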

The Social Dynamics of Language Change in Online Networks

no code implementations 7 Sep 2016 Rahul Goel, Sandeep Soni, Naman Goyal, John Paparrizos, Hanna Wallach, Fernando Diaz, Jacob Eisenstein

Language change is a complex social phenomenon, revealing pathways of communication and sociocultural influence.

Microclustering: When the Cluster Sizes Grow Sublinearly with the Size of the Data Set

no code implementations 2 Dec 2015 Jeffrey Miller, Brenda Betancourt, Abbas Zaidi, Hanna Wallach, Rebecca C. Steorts

Most generative models for clustering implicitly assume that the number of data points in each cluster grows linearly with the total number of data points.

Clustering • Entity Resolution

Inferring Multilateral Relations from Dynamic Pairwise Interactions

no code implementations 15 Nov 2013 Aaron Schein, Juston Moore, Hanna Wallach

Correlations between anomalous activity patterns can yield pertinent information about complex social processes: a significant deviation from normal behavior, exhibited simultaneously by multiple pairs of actors, provides evidence for some underlying relationship involving those pairs, i.e., a multilateral relation.

Improving fairness in machine learning systems: What do industry practitioners need?

no code implementations 13 Dec 2018 Kenneth Holstein, Jennifer Wortman Vaughan, Hal Daumé III, Miro Dudík, Hanna Wallach

The potential for machine learning (ML) systems to amplify social inequities and unfairness is receiving increasing popular and academic attention.

BIG-bench Machine Learning • Fairness

What's in a Name? Reducing Bias in Bios without Access to Protected Attributes

no code implementations NAACL 2019 Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky, Adam Tauman Kalai

In the context of mitigating bias in occupation classification, we propose a method for discouraging correlation between the predicted probability of an individual's true occupation and a word embedding of their name.

Word Embeddings
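
A rough sketch of a decorrelation penalty of the kind the abstract describes: penalize covariance between the predicted probability of the true occupation and the dimensions of a name embedding. The paper's precise formulation, training setup, and embeddings differ; the arrays below are placeholders.

```python
# Decorrelation penalty sketch: covariance between per-example predicted
# probability of the true class and each name-embedding dimension.
import numpy as np

rng = np.random.default_rng(0)
n, d = 512, 16
p_true = rng.random(n)                  # model's probability of the true occupation
name_emb = rng.normal(size=(n, d))      # word embeddings of names

p_c = p_true - p_true.mean()
e_c = name_emb - name_emb.mean(axis=0)
cov = e_c.T @ p_c / n                   # covariance with each embedding dimension
penalty = np.sum(cov ** 2)              # would be added to the classification loss
print(penalty)
```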

Quantifying the Semantic Core of Gender Systems

no code implementations IJCNLP 2019 Adina Williams, Ryan Cotterell, Lawrence Wolf-Sonkin, Damián Blasi, Hanna Wallach

To that end, we use canonical correlation analysis to correlate the grammatical gender of inanimate nouns with an externally grounded definition of their lexical semantics.
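
A minimal sketch of the analysis tool named in the abstract, using scikit-learn's CCA on synthetic stand-ins for grammatical gender labels and word vectors; the paper uses real nouns and externally grounded semantic representations.

```python
# Canonical correlation analysis between one-hot grammatical gender and
# distributional word vectors. All data here is randomly generated.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n, d = 500, 50
gender = np.eye(3)[rng.integers(0, 3, size=n)]   # one-hot, e.g., MASC/FEM/NEUT
vectors = rng.normal(size=(n, d))                # vectors for the same nouns

cca = CCA(n_components=2).fit(gender, vectors)
U, V = cca.transform(gender, vectors)
corrs = [np.corrcoef(U[:, i], V[:, i])[0, 1] for i in range(2)]
print(corrs)                                     # canonical correlations
```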

Measurement and Fairness

no code implementations 11 Dec 2019 Abigail Z. Jacobs, Hanna Wallach

We argue that this contestedness underlies recent debates about fairness definitions: although these debates appear to be about different operationalizations, they are, in fact, debates about different theoretical understandings of fairness.

Fairness

On the Relationships Between the Grammatical Genders of Inanimate Nouns and Their Co-Occurring Adjectives and Verbs

no code implementations 3 May 2020 Adina Williams, Ryan Cotterell, Lawrence Wolf-Sonkin, Damián Blasi, Hanna Wallach

We also find that there are statistically significant relationships between the grammatical genders of inanimate nouns and the verbs that take those nouns as direct objects, as indirect objects, and as subjects.

Language (Technology) is Power: A Critical Survey of "Bias" in NLP

no code implementations ACL 2020 Su Lin Blodgett, Solon Barocas, Hal Daumé III, Hanna Wallach

We survey 146 papers analyzing "bias" in NLP systems, finding that their motivations are often vague, inconsistent, and lacking in normative reasoning, despite the fact that analyzing "bias" is an inherently normative process.

Designing Disaggregated Evaluations of AI Systems: Choices, Considerations, and Tradeoffs

no code implementations 10 Mar 2021 Solon Barocas, Anhong Guo, Ece Kamar, Jacquelyn Krones, Meredith Ringel Morris, Jennifer Wortman Vaughan, Duncan Wadsworth, Hanna Wallach

Disaggregated evaluations of AI systems, in which system performance is assessed and reported separately for different groups of people, are conceptually simple.
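
Conceptually, a disaggregated evaluation computes the same metric separately for each group rather than one aggregate number; the paper's contribution is unpacking the design choices involved (which groups, which metrics, how to report). A minimal sketch on synthetic data:

```python
# Disaggregated evaluation sketch: per-group accuracy alongside the aggregate.
import numpy as np

rng = np.random.default_rng(0)
n = 2_000
group = rng.choice(["A", "B", "C"], size=n)
y_true = rng.integers(0, 2, size=n)
y_pred = np.where(rng.random(n) < 0.85, y_true, 1 - y_true)

print(f"overall accuracy: {(y_pred == y_true).mean():.3f}")
for g in np.unique(group):
    mask = group == g
    print(f"group {g}: accuracy={(y_pred[mask] == y_true[mask]).mean():.3f}")
```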

Doubly Non-Central Beta Matrix Factorization for DNA Methylation Data

no code implementations 12 Jun 2021 Aaron Schein, Anjali Nagulpally, Hanna Wallach, Patrick Flaherty

We present a new non-negative matrix factorization model for (0, 1) bounded-support data based on the doubly non-central beta (DNCB) distribution, a generalization of the beta distribution.
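
One standard construction of the DNCB distribution, via Poisson augmentation of a beta's shape parameters, also shows why it generalizes the beta: setting λ₁ = λ₂ = 0 recovers Beta(ε₁, ε₂). The paper's matrix factorization builds further structure on top of this distribution.

```latex
% Doubly non-central beta via Poisson augmentation (one standard construction):
Y_1 \sim \operatorname{Poisson}(\lambda_1), \quad
Y_2 \sim \operatorname{Poisson}(\lambda_2), \quad
X \mid Y_1, Y_2 \sim \operatorname{Beta}(\varepsilon_1 + Y_1,\; \varepsilon_2 + Y_2)
\;\Longrightarrow\;
X \sim \operatorname{DNCB}(\varepsilon_1, \varepsilon_2, \lambda_1, \lambda_2)
```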

Assessing the Fairness of AI Systems: AI Practitioners' Processes, Challenges, and Needs for Support

no code implementations 10 Dec 2021 Michael Madaio, Lisa Egede, Hariharan Subramonyam, Jennifer Wortman Vaughan, Hanna Wallach

Various tools and practices have been developed to support practitioners in identifying, assessing, and mitigating fairness-related harms caused by AI systems.

Fairness

Understanding Machine Learning Practitioners' Data Documentation Perceptions, Needs, Challenges, and Desiderata

no code implementations 6 Jun 2022 Amy K. Heger, Liz B. Marquis, Mihaela Vorvoreanu, Hanna Wallach, Jennifer Wortman Vaughan

Despite the fact that data documentation frameworks are often motivated from the perspective of responsible AI, participants did not make the connection between the questions that they were asked to answer and their responsible AI implications.

BIG-bench Machine Learning

"One-Size-Fits-All"? Examining Expectations around What Constitute "Fair" or "Good" NLG System Behaviors

no code implementations 23 Oct 2023 Li Lucy, Su Lin Blodgett, Milad Shokouhi, Hanna Wallach, Alexandra Olteanu

Fairness-related assumptions about what constitute appropriate NLG system behaviors range from invariance, where systems are expected to behave identically for social groups, to adaptation, where behaviors should instead vary across them.

Fairness
