Search Results for author: James Zou

Found 157 papers, 76 papers with code

Explaining the Trump Gap in Social Distancing Using COVID Discourse

no code implementations EMNLP (NLP-COVID19) 2020 Austin Van Loon, Sheridan Stewart, Brandon Waldon, Shrinidhi K Lakshmikanth, Ishan Shah, Sharath Chandra Guntuku, Garrick Sherman, James Zou, Johannes Eichstaedt

Our ability to limit the future spread of COVID-19 will in part depend on our understanding of the psychological and sociological processes that lead people to follow or reject coronavirus health behaviors.

Word Embeddings

Are More LLM Calls All You Need? Towards Scaling Laws of Compound Inference Systems

no code implementations 4 Mar 2024 Lingjiao Chen, Jared Quincy Davis, Boris Hanin, Peter Bailis, Ion Stoica, Matei Zaharia, James Zou

We find empirically that across multiple language tasks, surprisingly, Voting Inference Systems' performance first increases but then decreases as a function of the number of LLM calls.

Language Modelling Large Language Model
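The non-monotone behavior reported above can be illustrated with a toy model: if some queries are "easy" (each call is correct with probability above 1/2) and others "hard" (below 1/2), majority voting drives easy queries toward certainty and hard queries toward zero, so aggregate accuracy can first rise and then fall with the number of calls. A minimal sketch with made-up probabilities (not the paper's experiments or scaling-law fit):

```python
from math import comb

def majority_vote_acc(p, k):
    """Probability that a majority of k independent calls, each correct with
    probability p, yields the right answer (odd k avoids ties)."""
    return sum(comb(k, i) * p**i * (1 - p) ** (k - i) for i in range(k // 2 + 1, k + 1))

def aggregate_acc(k, p_easy=0.7, p_hard=0.45, frac_easy=0.5):
    """Accuracy on a workload mixing easy (p > 1/2) and hard (p < 1/2) queries."""
    return frac_easy * majority_vote_acc(p_easy, k) + (1 - frac_easy) * majority_vote_acc(p_hard, k)

# Accuracy rises with a few calls, then declines as hard queries are voted wrong.
for k in (1, 3, 15, 1001):
    print(k, round(aggregate_acc(k), 3))
```

With these illustrative numbers, accuracy at 15 calls exceeds accuracy at 1 call, while at 1001 calls it has fallen back toward the easy-query fraction.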

Simple linear attention language models balance the recall-throughput tradeoff

1 code implementation 28 Feb 2024 Simran Arora, Sabri Eyuboglu, Michael Zhang, Aman Timalsina, Silas Alberti, Dylan Zinsley, James Zou, Atri Rudra, Christopher Ré

In this work, we explore whether we can improve language model efficiency (e.g., by reducing memory consumption) without compromising on recall.

Language Modelling Text Generation

Large Language Models are Vulnerable to Bait-and-Switch Attacks for Generating Harmful Content

no code implementations 21 Feb 2024 Federico Bianchi, James Zou

The risks derived from large language models (LLMs) generating deceptive and damaging content have been the subject of considerable research, but even safe generations can lead to problematic downstream impacts.

Prospector Heads: Generalized Feature Attribution for Large Models & Data

1 code implementation 18 Feb 2024 Gautam Machiraju, Alexander Derry, Arjun Desai, Neel Guha, Amir-Hossein Karimi, James Zou, Russ Altman, Christopher Ré, Parag Mallick

Feature attribution, the ability to localize regions of the input data that are relevant for classification, is an important capability for machine learning models in scientific and biomedical domains.

How Well Can LLMs Negotiate? NegotiationArena Platform and Analysis

1 code implementation 8 Feb 2024 Federico Bianchi, Patrick John Chia, Mert Yuksekgonul, Jacopo Tagliabue, Dan Jurafsky, James Zou

We develop NegotiationArena: a flexible framework for evaluating and probing the negotiation abilities of LLM agents.

What's documented in AI? Systematic Analysis of 32K AI Model Cards

1 code implementation 7 Feb 2024 Weixin Liang, Nazneen Rajani, Xinyu Yang, Ezinwanne Ozoani, Eric Wu, Yiqun Chen, Daniel Scott Smith, James Zou

To evaluate the impact of model cards, we conducted an intervention study by adding detailed model cards to 42 popular models which had no or sparse model cards previously.

Informativeness

Selecting Large Language Model to Fine-tune via Rectified Scaling Law

no code implementations 4 Feb 2024 Haowei Lin, Baizhou Huang, Haotian Ye, Qinyu Chen, ZiHao Wang, Sujian Li, Jianzhu Ma, Xiaojun Wan, James Zou, Yitao Liang

The ever-growing ecosystem of LLMs has posed a challenge in selecting the most appropriate pre-trained model to fine-tune amidst a sea of options.

Language Modelling Large Language Model

Stochastic Amortization: A Unified Approach to Accelerate Feature and Data Attribution

1 code implementation 29 Jan 2024 Ian Covert, Chanwoo Kim, Su-In Lee, James Zou, Tatsunori Hashimoto

Many tasks in explainable machine learning, such as data valuation and feature attribution, perform expensive computation for each data point and can be intractable for large datasets.

Data Valuation

Navigating Dataset Documentations in AI: A Large-Scale Analysis of Dataset Cards on Hugging Face

1 code implementation 24 Jan 2024 Xinyu Yang, Weixin Liang, James Zou

By analyzing all 7,433 dataset documentation pages on Hugging Face, our investigation provides an overview of the Hugging Face dataset ecosystem and insights into dataset documentation practices, yielding 5 main findings: (1) The dataset card completion rate shows marked heterogeneity correlated with dataset popularity.

The complementary contributions of academia and industry to AI research

no code implementations 4 Jan 2024 Lizhen Liang, Han Zhuang, James Zou, Daniel E. Acuna

Artificial intelligence (AI) has seen tremendous development in industry and academia.

Can AI Be as Creative as Humans?

no code implementations 3 Jan 2024 Haonan Wang, James Zou, Michael Mozer, Anirudh Goyal, Alex Lamb, Linjun Zhang, Weijie J Su, Zhun Deng, Michael Qizhe Xie, Hannah Brown, Kenji Kawaguchi

With the rise of advanced generative AI models capable of tasks once reserved for human creativity, the study of AI's creative potential becomes imperative for its responsible development and application.

Learning and Forgetting Unsafe Examples in Large Language Models

no code implementations 20 Dec 2023 Jiachen Zhao, Zhun Deng, David Madras, James Zou, Mengye Ren

As the number of large language models (LLMs) released to the public grows, there is a pressing need to understand the safety implications associated with these models learning from third-party custom finetuning data.

Zoology: Measuring and Improving Recall in Efficient Language Models

2 code implementations 8 Dec 2023 Simran Arora, Sabri Eyuboglu, Aman Timalsina, Isys Johnson, Michael Poli, James Zou, Atri Rudra, Christopher Ré

To close the gap between synthetics and real language, we develop a new formalization of the task called multi-query associative recall (MQAR) that better reflects actual language.
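The MQAR task is easy to generate synthetically: a sequence lays out key-value bindings and then asks for the values of several of the keys. A toy generator (token vocabulary and sequence layout are illustrative, not the paper's exact configuration):

```python
import random

random.seed(0)

def make_mqar(num_pairs=4, num_queries=3, vocab=range(10, 100)):
    """Build one toy MQAR sequence: a prefix of key-value pairs followed by
    several queried keys; the targets are the values bound to those keys."""
    keys = random.sample(list(vocab), num_pairs)
    vals = random.sample(list(vocab), num_pairs)
    binding = dict(zip(keys, vals))
    queries = random.sample(keys, num_queries)
    # Interleave k1, v1, k2, v2, ... then append the queried keys.
    sequence = [tok for pair in zip(keys, vals) for tok in pair] + queries
    targets = [binding[q] for q in queries]
    return sequence, targets

seq, targets = make_mqar()
```

A model solves the instance if, at each query position, it emits the value bound to that key earlier in the sequence.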

GraphMETRO: Mitigating Complex Graph Distribution Shifts via Mixture of Aligned Experts

no code implementations 7 Dec 2023 Shirley Wu, Kaidi Cao, Bruno Ribeiro, James Zou, Jure Leskovec

Graph data are inherently complex and heterogeneous, leading to a high natural diversity of distributional shifts.

New Evaluation Metrics Capture Quality Degradation due to LLM Watermarking

no code implementations 4 Dec 2023 Karanpartap Singh, James Zou

With the increasing use of large-language models (LLMs) like ChatGPT, watermarking has emerged as a promising approach for tracing machine-generated content.

Binary Classification

Data Acquisition: A New Frontier in Data-centric AI

no code implementations 22 Nov 2023 Lingjiao Chen, Bilge Acun, Newsha Ardalani, Yifan Sun, Feiyang Kang, Hanrui Lyu, Yongchan Kwon, Ruoxi Jia, Carole-Jean Wu, Matei Zaharia, James Zou

As Machine Learning (ML) systems continue to grow, the demand for relevant and comprehensive datasets becomes imperative.

In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering

1 code implementation 11 Nov 2023 Sheng Liu, Haotian Ye, Lei Xing, James Zou

On a new query, instead of adding demonstrations to the prompt, we shift the latent states of the LLM using the ICV.

In-Context Learning Style Transfer
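The steering step described above can be sketched with toy vectors (hypothetical hidden states as small NumPy arrays; the real method operates on transformer latent states, with layer-wise details this sketch ignores):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy hidden-state dimension

# Hypothetical demonstration pairs: latent states of "source" inputs and of the
# same inputs rewritten in the target style, differing along one direction.
h_source = rng.normal(size=(5, d))
style_direction = np.zeros(d)
style_direction[0] = 1.0
h_target = h_source + style_direction

# The in-context vector: the mean latent difference across demonstration pairs.
icv = (h_target - h_source).mean(axis=0)

# At query time, shift the query's latent state instead of prepending demos.
alpha = 1.0  # steering strength
h_query = rng.normal(size=d)
h_steered = h_query + alpha * icv
```

In this toy setup the ICV recovers exactly the planted style direction, and adding it moves the query's latent state along that direction.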

ChatGPT Exhibits Gender and Racial Biases in Acute Coronary Syndrome Management

no code implementations 10 Nov 2023 Angela Zhang, Mert Yuksekgonul, Joshua Guild, James Zou, Joseph C. Wu

One early application has been to medicine, where LLMs have been investigated to streamline clinical workflows and facilitate clinical analysis and decision-making.

Decision Making Management

Holistic Analysis of Hallucination in GPT-4V(ision): Bias and Interference Challenges

1 code implementation 6 Nov 2023 Chenhang Cui, Yiyang Zhou, Xinyu Yang, Shirley Wu, Linjun Zhang, James Zou, Huaxiu Yao

To bridge this gap, we introduce a new benchmark, namely, the Bias and Interference Challenges in Visual Language Models (Bingo).

Hallucination

Can large language models provide useful feedback on research papers? A large-scale empirical analysis

1 code implementation 3 Oct 2023 Weixin Liang, Yuhui Zhang, Hancheng Cao, Binglu Wang, Daisy Ding, Xinyu Yang, Kailas Vodrahalli, Siyu He, Daniel Smith, Yian Yin, Daniel McFarland, James Zou

We first quantitatively compared GPT-4's generated feedback with human peer reviewer feedback in 15 Nature family journals (3,096 papers in total) and the ICLR machine learning conference (1,709 papers).

DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models

1 code implementation 2 Oct 2023 Yongchan Kwon, Eric Wu, Kevin Wu, James Zou

Quantifying the impact of training data points is crucial for understanding the outputs of machine learning models and for improving the transparency of the AI pipeline.

Influence Approximation

Safety-Tuned LLaMAs: Lessons From Improving the Safety of Large Language Models that Follow Instructions

1 code implementation 14 Sep 2023 Federico Bianchi, Mirac Suzgun, Giuseppe Attanasio, Paul Röttger, Dan Jurafsky, Tatsunori Hashimoto, James Zou

Training large language models to follow instructions makes them perform better on a wide range of tasks and generally become more helpful.

Large language models in medicine: the potentials and pitfalls

no code implementations 31 Aug 2023 Jesutofunmi A. Omiye, Haiwen Gui, Shawheen J. Rezaei, James Zou, Roxana Daneshjou

Large language models (LLMs) have been applied to tasks in healthcare, ranging from medical exam questions to responding to patient questions.

Is your data alignable? Principled and interpretable alignability testing and integration of single-cell data

1 code implementation 3 Aug 2023 Rong Ma, Eric D. Sun, David Donoho, James Zou

To overcome these limitations, we present a spectral manifold alignment and inference (SMAI) framework, which enables principled and interpretable alignability testing and structure-preserving integration of single-cell data with the same type of features.

Data Integration Imputation

How is ChatGPT's behavior changing over time?

4 code implementations 18 Jul 2023 Lingjiao Chen, Matei Zaharia, James Zou

We find that the performance and behavior of both GPT-3.5 and GPT-4 can vary greatly over time.

Code Generation Language Modelling +3

What Should Data Science Education Do with Large Language Models?

no code implementations 6 Jul 2023 Xinming Tu, James Zou, Weijie J. Su, Linjun Zhang

LLMs can also play a significant role in the classroom as interactive teaching and learning tools, contributing to personalized education.

ArtWhisperer: A Dataset for Characterizing Human-AI Interactions in Artistic Creations

1 code implementation 13 Jun 2023 Kailas Vodrahalli, James Zou

To study this interaction, we created ArtWhisperer, an online game where users are given a target image and are tasked with iteratively finding a prompt that creates a similar-looking image as the target.

Beyond Positive Scaling: How Negation Impacts Scaling Trends of Language Models

1 code implementation 27 May 2023 Yuhui Zhang, Michihiro Yasunaga, Zhengping Zhou, Jeff Z. HaoChen, James Zou, Percy Liang, Serena Yeung

Language models have been shown to exhibit positive scaling, where performance improves as models are scaled up in terms of size, compute, or data.

Negation Question Answering +1

FrugalGPT: How to Use Large Language Models While Reducing Cost and Improving Performance

no code implementations 9 May 2023 Lingjiao Chen, Matei Zaharia, James Zou

There is a rapidly growing number of large language models (LLMs) that users can query for a fee.
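One of the cost-reduction strategies FrugalGPT studies is an LLM cascade: route each query to models from cheapest to most expensive, stopping as soon as a scorer judges an answer good enough. A minimal sketch with toy stand-in functions (the model and scorer lambdas below are hypothetical placeholders, not real API calls):

```python
def cascade(prompt, models, scorer, threshold=0.8):
    """Query models from cheapest to most expensive and return the first answer
    whose quality score clears the threshold; otherwise keep the last answer."""
    answer = None
    for model in models:
        answer = model(prompt)
        if scorer(prompt, answer) >= threshold:
            break
    return answer

# Toy stand-ins for a cheap model, an expensive model, and an answer scorer.
cheap = lambda p: "unsure" if "hard" in p else "cheap:42"
expensive = lambda p: "expensive:42"
score = lambda p, a: 0.1 if a == "unsure" else 0.9

print(cascade("an easy question", [cheap, expensive], score))  # served by the cheap model
print(cascade("a hard question", [cheap, expensive], score))   # falls through to the expensive one
```

Easy queries never reach the expensive model, which is where the cost savings come from.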

Accuracy on the Curve: On the Nonlinear Correlation of ML Performance Between Data Subpopulations

1 code implementation 4 May 2023 Weixin Liang, Yining Mao, Yongchan Kwon, Xinyu Yang, James Zou

Our work highlights the importance of understanding the nonlinear effects of model improvement on performance in different subpopulations, and has the potential to inform the development of more equitable and responsible machine learning models.

Fairness

Discover and Cure: Concept-aware Mitigation of Spurious Correlation

1 code implementation 1 May 2023 Shirley Wu, Mert Yuksekgonul, Linjun Zhang, James Zou

Deep neural networks often rely on spurious correlations to make predictions, which hinders generalization beyond training environments.

Lesion Classification Object Recognition +1

Data-Driven Subgroup Identification for Linear Regression

1 code implementation 29 Apr 2023 Zachary Izzo, Ruishan Liu, James Zou

To do this, simple parametric models are frequently used (e.g., coefficients of linear regression) but usually fitted on the whole dataset.

regression

Data-OOB: Out-of-bag Estimate as a Simple and Efficient Data Value

2 code implementations 16 Apr 2023 Yongchan Kwon, James Zou

As a result, it has been recognized as infeasible to apply to large datasets.

Data Valuation
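The out-of-bag idea can be sketched on synthetic data: score each point by how often bagged models trained without it agree with its label, so mislabeled points score low. A toy version with a 1-nearest-neighbour base learner standing in for the trees typically used:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
flipped = rng.choice(n, size=20, replace=False)  # corrupt 10% of the labels
y_noisy = y.copy()
y_noisy[flipped] ^= 1

B = 100  # number of bootstrap models
agree = np.zeros(n)
counts = np.zeros(n)
for _ in range(B):
    boot = rng.choice(n, size=n, replace=True)
    oob = np.setdiff1d(np.arange(n), boot)  # points this model never saw
    for i in oob:
        # 1-nearest-neighbour prediction from the bootstrap sample
        j = boot[np.argmin(np.linalg.norm(X[boot] - X[i], axis=1))]
        agree[i] += y_noisy[j] == y_noisy[i]
        counts[i] += 1

data_oob = agree / np.maximum(counts, 1)  # each point's out-of-bag value
```

Flipped points disagree with their neighbours' labels and receive much lower values than clean points, without ever retraining on subsets, which is what makes the estimate cheap at scale.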

Last-Layer Fairness Fine-tuning is Simple and Effective for Neural Networks

2 code implementations 8 Apr 2023 Yuzhen Mao, Zhun Deng, Huaxiu Yao, Ting Ye, Kenji Kawaguchi, James Zou

As machine learning has been deployed ubiquitously across applications in modern data science, algorithmic fairness has become a great concern.

Fairness Open-Ended Question Answering +1

GPT detectors are biased against non-native English writers

2 code implementations 6 Apr 2023 Weixin Liang, Mert Yuksekgonul, Yining Mao, Eric Wu, James Zou

In this study, we evaluate the performance of several widely-used GPT detectors using writing samples from native and non-native English writers.

Fairness

Understanding Multimodal Contrastive Learning and Incorporating Unpaired Data

1 code implementation 13 Feb 2023 Ryumei Nakada, Halil Ibrahim Gulluk, Zhun Deng, Wenlong Ji, James Zou, Linjun Zhang

We show that the algorithm can detect the ground-truth pairs and improve performance by fully exploiting unpaired datasets.

Contrastive Learning

Diagnosing and Rectifying Vision Models using Language

1 code implementation 8 Feb 2023 Yuhui Zhang, Jeff Z. HaoChen, Shih-Cheng Huang, Kuan-Chieh Wang, James Zou, Serena Yeung

Our proposed method can discover high-error data slices, identify influential attributes and further rectify undesirable model behaviors, without requiring any visual data.

Contrastive Learning

SkinCon: A skin disease dataset densely annotated by domain experts for fine-grained model debugging and analysis

no code implementations 1 Feb 2023 Roxana Daneshjou, Mert Yuksekgonul, Zhuo Ran Cai, Roberto Novoa, James Zou

To provide a medical dataset densely annotated by domain experts with annotations useful across multiple disease processes, we developed SkinCon: a skin disease dataset densely annotated by dermatologists.

Interpretable Machine Learning

Provable Membership Inference Privacy

no code implementations 12 Nov 2022 Zachary Izzo, Jinsung Yoon, Sercan O. Arik, James Zou

However, DP's strong theoretical guarantees often come at the cost of a large drop in its utility for machine learning, and DP guarantees themselves can be difficult to interpret.

Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale

1 code implementation 7 Nov 2022 Federico Bianchi, Pratyusha Kalluri, Esin Durmus, Faisal Ladhak, Myra Cheng, Debora Nozza, Tatsunori Hashimoto, Dan Jurafsky, James Zou, Aylin Caliskan

For example, we find cases of prompting for basic traits or social roles resulting in images reinforcing whiteness as ideal, prompting for occupations resulting in amplification of racial and gender disparities, and prompting for objects resulting in reification of American norms.

Text-to-Image Generation

A Spectral Method for Assessing and Combining Multiple Data Visualizations

1 code implementation 25 Oct 2022 Rong Ma, Eric D. Sun, James Zou

Then it leverages the eigenscores to obtain a consensus visualization, which has much improved quality over the individual visualizations in capturing the underlying true data structure.

Data Visualization Dimensionality Reduction

Freeze then Train: Towards Provable Representation Learning under Spurious Correlations and Feature Noise

1 code implementation 20 Oct 2022 Haotian Ye, James Zou, Linjun Zhang

This opens a promising strategy to first train a feature learner rather than a classifier, and then perform linear probing (last layer retraining) in the test environment.

Representation Learning

C-Mixup: Improving Generalization in Regression

1 code implementation 11 Oct 2022 Huaxiu Yao, Yiping Wang, Linjun Zhang, James Zou, Chelsea Finn

In this paper, we propose a simple yet powerful algorithm, C-Mixup, to improve generalization on regression tasks.

regression
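C-Mixup's core sampling rule is easy to sketch: mix each example with a partner drawn with probability proportional to a Gaussian kernel on label distance, then interpolate inputs and labels. The bandwidth and Beta parameters below are illustrative choices, not the paper's tuned values:

```python
import numpy as np

rng = np.random.default_rng(0)

def c_mixup_pair(X, y, i, bandwidth=1.0, alpha=2.0):
    """Mix example i with a partner drawn with probability proportional to a
    Gaussian kernel on label distance, then interpolate inputs and labels."""
    w = np.exp(-((y - y[i]) ** 2) / (2 * bandwidth**2))
    w[i] = 0.0                   # never mix a point with itself
    w /= w.sum()
    j = rng.choice(len(y), p=w)  # close labels are far more likely partners
    lam = rng.beta(alpha, alpha)  # mixing weight
    return lam * X[i] + (1 - lam) * X[j], lam * y[i] + (1 - lam) * y[j]

# Toy regression data: linear signal plus noise.
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)
x_mix, y_mix = c_mixup_pair(X, y, i=0)
```

Because partners have similar labels, the interpolated label stays close to both endpoints, which is what distinguishes this from vanilla mixup on regression targets.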

SEAL : Interactive Tool for Systematic Error Analysis and Labeling

no code implementations 11 Oct 2022 Nazneen Rajani, Weixin Liang, Lingjiao Chen, Meg Mitchell, James Zou

With the advent of Transformers, large language models (LLMs) have saturated well-known NLP benchmarks and leaderboards with high aggregate performance.

Knowledge-Driven New Drug Recommendation

no code implementations 11 Oct 2022 Zhenbang Wu, Huaxiu Yao, Zhe Su, David M Liebovitz, Lucas M Glass, James Zou, Chelsea Finn, Jimeng Sun

However, newly approved drugs do not have much historical prescription data and cannot leverage existing drug recommendation methods.

Few-Shot Learning Multi-Label Classification

When and why vision-language models behave like bags-of-words, and what to do about it?

1 code implementation 4 Oct 2022 Mert Yuksekgonul, Federico Bianchi, Pratyusha Kalluri, Dan Jurafsky, James Zou

ARO consists of Visual Genome Attribution, to test the understanding of objects' properties; Visual Genome Relation, to test for relational understanding; and COCO & Flickr30k-Order, to test for order sensitivity.

Contrastive Learning Retrieval +1

Data Budgeting for Machine Learning

no code implementations 3 Oct 2022 Xinyi Zhao, Weixin Liang, James Zou

Data is the fuel powering AI and creates tremendous value for many domains.

Ensembling improves stability and power of feature selection for deep learning models

no code implementations 2 Oct 2022 Prashnna K Gyawali, Xiaoxia Liu, James Zou, Zihuai He

Despite extensive recent efforts to define different feature importance metrics for deep learning models, we identified that inherent stochasticity in the design and training of deep learning models makes commonly used feature importance scores unstable.

Feature Importance feature selection

WeightedSHAP: analyzing and improving Shapley based feature attributions

1 code implementation 27 Sep 2022 Yongchan Kwon, James Zou

On several real-world datasets, we demonstrate that the influential features identified by WeightedSHAP are better able to recapitulate the model's predictions compared to the features identified by the Shapley value.

HAPI: A Large-scale Longitudinal Dataset of Commercial ML API Predictions

1 code implementation 18 Sep 2022 Lingjiao Chen, Zhihua Jin, Sabri Eyuboglu, Christopher Ré, Matei Zaharia, James Zou

HAPI is the first large-scale dataset of ML API usages and is a unique resource for studying ML-as-a-service (MLaaS).

object-detection Object Detection +4

Estimating and Explaining Model Performance When Both Covariates and Labels Shift

no code implementations 18 Sep 2022 Lingjiao Chen, Matei Zaharia, James Zou

We further propose SEES, an algorithmic framework to characterize the distribution shift under SJS and to estimate a model's performance on new data without any labels.

Development and Clinical Evaluation of an AI Support Tool for Improving Telemedicine Photo Quality

1 code implementation 12 Sep 2022 Kailas Vodrahalli, Justin Ko, Albert S. Chiou, Roberto Novoa, Abubakar Abid, Michelle Phung, Kiana Yekrang, Paige Petrone, James Zou, Roxana Daneshjou

To address this issue, we developed TrueImage 2.0, an artificial intelligence (AI) model for assessing patient photo quality for telemedicine and providing real-time feedback to patients for photo quality improvement.

Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models

3 code implementations 9 Jun 2022 Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshat Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Parrish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ambrose Slone, Ameet Rahane, Anantharaman S. Iyer, Anders Andreassen, Andrea Madotto, Andrea Santilli, Andreas Stuhlmüller, Andrew Dai, Andrew La, Andrew Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabassum, Arul Menezes, Arun Kirubarajan, Asher Mullokandov, Ashish Sabharwal, Austin Herrick, Avia Efrat, Aykut Erdem, Ayla Karakaş, B. Ryan Roberts, Bao Sheng Loe, Barret Zoph, Bartłomiej Bojanowski, Batuhan Özyurt, Behnam Hedayatnia, Behnam Neyshabur, Benjamin Inden, Benno Stein, Berk Ekmekci, Bill Yuchen Lin, Blake Howald, Bryan Orinion, Cameron Diao, Cameron Dour, Catherine Stinson, Cedrick Argueta, César Ferri Ramírez, Chandan Singh, Charles Rathkopf, Chenlin Meng, Chitta Baral, Chiyu Wu, Chris Callison-Burch, Chris Waites, Christian Voigt, Christopher D. Manning, Christopher Potts, Cindy Ramirez, Clara E. 
Rivera, Clemencia Siro, Colin Raffel, Courtney Ashcraft, Cristina Garbacea, Damien Sileo, Dan Garrette, Dan Hendrycks, Dan Kilman, Dan Roth, Daniel Freeman, Daniel Khashabi, Daniel Levy, Daniel Moseguí González, Danielle Perszyk, Danny Hernandez, Danqi Chen, Daphne Ippolito, Dar Gilboa, David Dohan, David Drakard, David Jurgens, Debajyoti Datta, Deep Ganguli, Denis Emelin, Denis Kleyko, Deniz Yuret, Derek Chen, Derek Tam, Dieuwke Hupkes, Diganta Misra, Dilyar Buzan, Dimitri Coelho Mollo, Diyi Yang, Dong-Ho Lee, Dylan Schrader, Ekaterina Shutova, Ekin Dogus Cubuk, Elad Segal, Eleanor Hagerman, Elizabeth Barnes, Elizabeth Donoway, Ellie Pavlick, Emanuele Rodola, Emma Lam, Eric Chu, Eric Tang, Erkut Erdem, Ernie Chang, Ethan A. Chi, Ethan Dyer, Ethan Jerzak, Ethan Kim, Eunice Engefu Manyasi, Evgenii Zheltonozhskii, Fanyue Xia, Fatemeh Siar, Fernando Martínez-Plumed, Francesca Happé, Francois Chollet, Frieda Rong, Gaurav Mishra, Genta Indra Winata, Gerard de Melo, Germán Kruszewski, Giambattista Parascandolo, Giorgio Mariani, Gloria Wang, Gonzalo Jaimovitch-López, Gregor Betz, Guy Gur-Ari, Hana Galijasevic, Hannah Kim, Hannah Rashkin, Hannaneh Hajishirzi, Harsh Mehta, Hayden Bogar, Henry Shevlin, Hinrich Schütze, Hiromu Yakura, Hongming Zhang, Hugh Mee Wong, Ian Ng, Isaac Noble, Jaap Jumelet, Jack Geissinger, Jackson Kernion, Jacob Hilton, Jaehoon Lee, Jaime Fernández Fisac, James B. Simon, James Koppel, James Zheng, James Zou, Jan Kocoń, Jana Thompson, Janelle Wingfield, Jared Kaplan, Jarema Radom, Jascha Sohl-Dickstein, Jason Phang, Jason Wei, Jason Yosinski, Jekaterina Novikova, Jelle Bosscher, Jennifer Marsh, Jeremy Kim, Jeroen Taal, Jesse Engel, Jesujoba Alabi, Jiacheng Xu, Jiaming Song, Jillian Tang, Joan Waweru, John Burden, John Miller, John U. Balis, Jonathan Batchelder, Jonathan Berant, Jörg Frohberg, Jos Rozen, Jose Hernandez-Orallo, Joseph Boudeman, Joseph Guerr, Joseph Jones, Joshua B. Tenenbaum, Joshua S. 
Rule, Joyce Chua, Kamil Kanclerz, Karen Livescu, Karl Krauth, Karthik Gopalakrishnan, Katerina Ignatyeva, Katja Markert, Kaustubh D. Dhole, Kevin Gimpel, Kevin Omondi, Kory Mathewson, Kristen Chiafullo, Ksenia Shkaruta, Kumar Shridhar, Kyle McDonell, Kyle Richardson, Laria Reynolds, Leo Gao, Li Zhang, Liam Dugan, Lianhui Qin, Lidia Contreras-Ochando, Louis-Philippe Morency, Luca Moschella, Lucas Lam, Lucy Noble, Ludwig Schmidt, Luheng He, Luis Oliveros Colón, Luke Metz, Lütfi Kerem Şenel, Maarten Bosma, Maarten Sap, Maartje ter Hoeve, Maheen Farooqi, Manaal Faruqui, Mantas Mazeika, Marco Baturan, Marco Marelli, Marco Maru, Maria Jose Ramírez Quintana, Marie Tolkiehn, Mario Giulianelli, Martha Lewis, Martin Potthast, Matthew L. Leavitt, Matthias Hagen, Mátyás Schubert, Medina Orduna Baitemirova, Melody Arnaud, Melvin McElrath, Michael A. Yee, Michael Cohen, Michael Gu, Michael Ivanitskiy, Michael Starritt, Michael Strube, Michał Swędrowski, Michele Bevilacqua, Michihiro Yasunaga, Mihir Kale, Mike Cain, Mimee Xu, Mirac Suzgun, Mitch Walker, Mo Tiwari, Mohit Bansal, Moin Aminnaseri, Mor Geva, Mozhdeh Gheini, Mukund Varma T, Nanyun Peng, Nathan A. Chi, Nayeon Lee, Neta Gur-Ari Krakover, Nicholas Cameron, Nicholas Roberts, Nick Doiron, Nicole Martinez, Nikita Nangia, Niklas Deckers, Niklas Muennighoff, Nitish Shirish Keskar, Niveditha S. Iyer, Noah Constant, Noah Fiedel, Nuan Wen, Oliver Zhang, Omar Agha, Omar Elbaghdadi, Omer Levy, Owain Evans, Pablo Antonio Moreno Casares, Parth Doshi, Pascale Fung, Paul Pu Liang, Paul Vicol, Pegah Alipoormolabashi, Peiyuan Liao, Percy Liang, Peter Chang, Peter Eckersley, Phu Mon Htut, Pinyu Hwang, Piotr Miłkowski, Piyush Patil, Pouya Pezeshkpour, Priti Oli, Qiaozhu Mei, Qing Lyu, Qinlang Chen, Rabin Banjade, Rachel Etta Rudolph, Raefer Gabriel, Rahel Habacker, Ramon Risco, Raphaël Millière, Rhythm Garg, Richard Barnes, Rif A. 
Saurous, Riku Arakawa, Robbe Raymaekers, Robert Frank, Rohan Sikand, Roman Novak, Roman Sitelew, Ronan LeBras, Rosanne Liu, Rowan Jacobs, Rui Zhang, Ruslan Salakhutdinov, Ryan Chi, Ryan Lee, Ryan Stovall, Ryan Teehan, Rylan Yang, Sahib Singh, Saif M. Mohammad, Sajant Anand, Sam Dillavou, Sam Shleifer, Sam Wiseman, Samuel Gruetter, Samuel R. Bowman, Samuel S. Schoenholz, Sanghyun Han, Sanjeev Kwatra, Sarah A. Rous, Sarik Ghazarian, Sayan Ghosh, Sean Casey, Sebastian Bischoff, Sebastian Gehrmann, Sebastian Schuster, Sepideh Sadeghi, Shadi Hamdan, Sharon Zhou, Shashank Srivastava, Sherry Shi, Shikhar Singh, Shima Asaadi, Shixiang Shane Gu, Shubh Pachchigar, Shubham Toshniwal, Shyam Upadhyay, Shyamolima, Debnath, Siamak Shakeri, Simon Thormeyer, Simone Melzi, Siva Reddy, Sneha Priscilla Makini, Soo-Hwan Lee, Spencer Torene, Sriharsha Hatwar, Stanislas Dehaene, Stefan Divic, Stefano Ermon, Stella Biderman, Stephanie Lin, Stephen Prasad, Steven T. Piantadosi, Stuart M. Shieber, Summer Misherghi, Svetlana Kiritchenko, Swaroop Mishra, Tal Linzen, Tal Schuster, Tao Li, Tao Yu, Tariq Ali, Tatsu Hashimoto, Te-Lin Wu, Théo Desbordes, Theodore Rothschild, Thomas Phan, Tianle Wang, Tiberius Nkinyili, Timo Schick, Timofei Kornev, Titus Tunduny, Tobias Gerstenberg, Trenton Chang, Trishala Neeraj, Tushar Khot, Tyler Shultz, Uri Shaham, Vedant Misra, Vera Demberg, Victoria Nyamai, Vikas Raunak, Vinay Ramasesh, Vinay Uday Prabhu, Vishakh Padmakumar, Vivek Srikumar, William Fedus, William Saunders, William Zhang, Wout Vossen, Xiang Ren, Xiaoyu Tong, Xinran Zhao, Xinyi Wu, Xudong Shen, Yadollah Yaghoobzadeh, Yair Lakretz, Yangqiu Song, Yasaman Bahri, Yejin Choi, Yichi Yang, Yiding Hao, Yifu Chen, Yonatan Belinkov, Yu Hou, Yufang Hou, Yuntao Bai, Zachary Seid, Zhuoye Zhao, Zijian Wang, Zijie J. Wang, ZiRui Wang, Ziyi Wu

BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models.

Common Sense Reasoning Math +1

FIFA: Making Fairness More Generalizable in Classifiers Trained on Imbalanced Data

no code implementations 6 Jun 2022 Zhun Deng, Jiayao Zhang, Linjun Zhang, Ting Ye, Yates Coley, Weijie J. Su, James Zou

Specifically, FIFA encourages both classification and fairness generalization and can be flexibly combined with many existing fair learning methods with logits-based losses.

Classification Fairness

Post-hoc Concept Bottleneck Models

no code implementations 31 May 2022 Mert Yuksekgonul, Maggie Wang, James Zou

When concept annotations are not available on the training data, we show that PCBM can transfer concepts from other datasets or from natural language descriptions of concepts via multimodal models.

Model Editing

A Unified f-divergence Framework Generalizing VAE and GAN

no code implementations 11 May 2022 Jaime Roquero Gimenez, James Zou

Developing deep generative models that flexibly incorporate diverse measures of probability distance is an important area of research.

Improving genetic risk prediction across diverse population by disentangling ancestry representations

no code implementations 10 May 2022 Prashnna K Gyawali, Yann Le Guen, Xiaoxia Liu, Hua Tang, James Zou, Zihuai He

This can lead to biases in the risk predictors resulting in poor generalization when applied to minority populations and admixed individuals such as African Americans.

Genetic Risk Prediction

Domino: Discovering Systematic Errors with Cross-Modal Embeddings

2 code implementations ICLR 2022 Sabri Eyuboglu, Maya Varma, Khaled Saab, Jean-Benoit Delbrouck, Christopher Lee-Messer, Jared Dunnmon, James Zou, Christopher Ré

In this work, we address these challenges by first designing a principled evaluation framework that enables a quantitative comparison of SDMs across 1,235 slice discovery settings in three input domains (natural images, medical images, and time-series data).

Representation Learning Time Series Analysis

Disparities in Dermatology AI Performance on a Diverse, Curated Clinical Image Set

no code implementations 15 Mar 2022 Roxana Daneshjou, Kailas Vodrahalli, Roberto A Novoa, Melissa Jenkins, Weixin Liang, Veronica Rotemberg, Justin Ko, Susan M Swetter, Elizabeth E Bailey, Olivier Gevaert, Pritam Mukherjee, Michelle Phung, Kiana Yekrang, Bradley Fong, Rachna Sahasrabudhe, Johan A. C. Allerup, Utako Okata-Karigane, James Zou, Albert Chiou

To ascertain potential biases in algorithm performance in this context, we curated the Diverse Dermatology Images (DDI) dataset-the first publicly available, expertly curated, and pathologically confirmed image dataset with diverse skin tones.

Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning

2 code implementations 3 Mar 2022 Weixin Liang, Yuhui Zhang, Yongchan Kwon, Serena Yeung, James Zou

Our systematic analysis demonstrates that this gap is caused by a combination of model initialization and contrastive learning optimization.

Contrastive Learning Fairness +2

MetaShift: A Dataset of Datasets for Evaluating Contextual Distribution Shifts and Training Conflicts

1 code implementation ICLR 2022 Weixin Liang, James Zou

We present MetaShift--a collection of 12,868 sets of natural images across 410 classes--to address this challenge.

Benchmarking

Uncalibrated Models Can Improve Human-AI Collaboration

1 code implementation 12 Feb 2022 Kailas Vodrahalli, Tobias Gerstenberg, James Zou

In this paper, we present an initial exploration suggesting that showing AI models as more confident than they actually are, even when the original AI is well-calibrated, can improve human-AI performance (measured as the accuracy and confidence of the human's final prediction after seeing the AI advice).

Decision Making

Competition over data: how does data purchase affect users?

no code implementations 26 Jan 2022 Yongchan Kwon, Antonio Ginart, James Zou

We introduce a new environment that allows ML predictors to use active learning algorithms to purchase labeled data within their budgets while competing against each other to attract users.

Active Learning

Submix: Practical Private Prediction for Large-Scale Language Models

no code implementations 4 Jan 2022 Antonio Ginart, Laurens van der Maaten, James Zou, Chuan Guo

Recent data-extraction attacks have exposed that language models can memorize some training samples verbatim.

Language Modelling

Improving Out-of-Distribution Robustness via Selective Augmentation

2 code implementations 2 Jan 2022 Huaxiu Yao, Yu Wang, Sai Li, Linjun Zhang, Weixin Liang, James Zou, Chelsea Finn

Machine learning algorithms typically assume that training and test examples are drawn from the same distribution.

How to Learn when Data Gradually Reacts to Your Model

no code implementations 13 Dec 2021 Zachary Izzo, James Zou, Lexing Ying

A recent line of work has focused on training machine learning (ML) models in the performative setting, i.e., when the data distribution reacts to the deployed model.

Explaining medical AI performance disparities across sites with confounder Shapley value analysis

no code implementations12 Nov 2021 Eric Wu, Kevin Wu, James Zou

Medical AI algorithms can often experience degraded performance when evaluated on previously unseen sites.

Beyond Importance Scores: Interpreting Tabular ML by Visualizing Feature Semantics

no code implementations10 Nov 2021 Amirata Ghorbani, Dina Berenbaum, Maor Ivgi, Yuval Dafna, James Zou

We address this limitation by introducing Feature Vectors, a new global interpretability method designed for tabular datasets.

Feature Importance

Beta Shapley: a Unified and Noise-reduced Data Valuation Framework for Machine Learning

2 code implementations26 Oct 2021 Yongchan Kwon, James Zou

Data Shapley has recently been proposed as a principled framework to quantify the contribution of each individual datum in machine learning.

BIG-bench Machine Learning Data Valuation

CloudPred: Predicting Patient Phenotypes From Single-cell RNA-seq

no code implementations13 Oct 2021 Bryan He, Matthew Thomson, Meena Subramaniam, Richard Perez, Chun Jimmie Ye, James Zou

Predicting phenotype from scRNA-seq is challenging for standard machine learning methods -- the number of cells measured can vary by orders of magnitude across individuals and the cell populations are also highly heterogeneous.

Interpretable Machine Learning

Clustering Plotted Data by Image Segmentation

1 code implementation CVPR 2022 Tarek Naous, Srinjay Sarkar, Abubakar Abid, James Zou

We describe the method and compare it to ten other clustering methods on synthetic data to illustrate its advantages and disadvantages.

Clustering Image Segmentation +3

The Power of Contrast for Feature Learning: A Theoretical Analysis

no code implementations6 Oct 2021 Wenlong Ji, Zhun Deng, Ryumei Nakada, James Zou, Linjun Zhang

Contrastive learning has achieved state-of-the-art performance in various self-supervised learning tasks and even outperforms its supervised counterpart.

Contrastive Learning Self-Supervised Learning +1

Language Models as Recommender Systems: Evaluations and Limitations

no code implementations NeurIPS Workshop ICBINB 2021 Yuhui Zhang, Hao Ding, Zeren Shui, Yifei Ma, James Zou, Anoop Deoras, Hao Wang

Pre-trained language models (PLMs) such as BERT and GPT learn general text representations and encode extensive world knowledge; thus, they can be efficiently and accurately adapted to various downstream tasks.

Movie Recommendation Session-Based Recommendations +1

Did the Model Change? Efficiently Assessing Machine Learning API Shifts

no code implementations29 Jul 2021 Lingjiao Chen, Tracy Cai, Matei Zaharia, James Zou

This motivated us to formulate the API shift assessment problem at a more fine-grained level as estimating how the API model's confusion matrix changes over time when the data distribution is constant.

BIG-bench Machine Learning

Do Humans Trust Advice More if it Comes from AI? An Analysis of Human-AI Interactions

1 code implementation14 Jul 2021 Kailas Vodrahalli, Roxana Daneshjou, Tobias Gerstenberg, James Zou

In decision support applications of AI, the AI algorithm's output is framed as a suggestion to a human user.

Meaningfully Debugging Model Mistakes using Conceptual Counterfactual Explanations

1 code implementation24 Jun 2021 Abubakar Abid, Mert Yuksekgonul, James Zou

Understanding and explaining the mistakes made by trained models is critical to many machine learning objectives, such as improving robustness, addressing concept drift, and mitigating biases.

counterfactual

Group-Structured Adversarial Training

no code implementations18 Jun 2021 Farzan Farnia, Amirali Aghazadeh, James Zou, David Tse

Robust training methods against perturbations to the input data have received great attention in the machine learning literature.

Adversarial Training Helps Transfer Learning via Better Representations

no code implementations NeurIPS 2021 Zhun Deng, Linjun Zhang, Kailas Vodrahalli, Kenji Kawaguchi, James Zou

Recent works empirically demonstrate that adversarial training in the source data can improve the ability of models to transfer to new domains.

Transfer Learning

MLDemon: Deployment Monitoring for Machine Learning Systems

no code implementations28 Apr 2021 Antonio Ginart, Martin Zhang, James Zou

Post-deployment monitoring of ML systems is critical for ensuring reliability, especially as new user inputs can differ from the training distribution.

BIG-bench Machine Learning

Data Shapley Valuation for Efficient Batch Active Learning

no code implementations16 Apr 2021 Amirata Ghorbani, James Zou, Andre Esteva

In this work, we introduce Active Data Shapley (ADS) -- a filtering layer for batch active learning that significantly increases the efficiency of active learning by pre-selecting, using a linear time computation, the highest-value points from an unlabeled dataset.

Active Learning

Efficient Online ML API Selection for Multi-Label Classification Tasks

no code implementations18 Feb 2021 Lingjiao Chen, Matei Zaharia, James Zou

In this work, we propose FrugalMCT, a principled framework that adaptively selects the APIs to use for different data in an online fashion while respecting the user's budget.

General Classification Multi-Label Classification +7

How to Learn when Data Reacts to Your Model: Performative Gradient Descent

1 code implementation15 Feb 2021 Zachary Izzo, Lexing Ying, James Zou

Performative distribution shift captures the setting where the choice of which ML model is deployed changes the data distribution.

When and How Mixup Improves Calibration

no code implementations11 Feb 2021 Linjun Zhang, Zhun Deng, Kenji Kawaguchi, James Zou

In addition, we study how Mixup improves calibration in semi-supervised learning.

Data Augmentation

Persistent Anti-Muslim Bias in Large Language Models

1 code implementation14 Jan 2021 Abubakar Abid, Maheen Farooqi, James Zou

It has been observed that large-scale language models capture undesirable societal biases, e.g., relating to race and gender; yet religious bias has been relatively unexplored.

Adversarial Text Language Modelling +1

Neural Group Testing to Accelerate Deep Learning

1 code implementation21 Nov 2020 Weixin Liang, James Zou

A key challenge of neural group testing is to modify a deep neural network so that it could test multiple samples in one forward pass.

Data Valuation for Medical Imaging Using Shapley Value: Application on A Large-scale Chest X-ray Dataset

no code implementations15 Oct 2020 Siyi Tang, Amirata Ghorbani, Rikiya Yamashita, Sameer Rehman, Jared A. Dunnmon, James Zou, Daniel L. Rubin

In this study, we used data Shapley, a data valuation metric, to quantify the value of training data to the performance of a pneumonia detection algorithm in a large chest X-ray dataset.

Data Valuation Pneumonia Detection

How Does Mixup Help With Robustness and Generalization?

no code implementations ICLR 2021 Linjun Zhang, Zhun Deng, Kenji Kawaguchi, Amirata Ghorbani, James Zou

For robustness, we show that minimizing the Mixup loss corresponds to approximately minimizing an upper bound of the adversarial loss.

Data Augmentation
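The Mixup papers above analyze a simple augmentation: each training example is replaced by a convex combination of two examples and their labels, with mixing weight drawn from a Beta distribution. A minimal sketch of that procedure (the `alpha` value and toy data here are illustrative assumptions, not taken from the papers):

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Mixup augmentation sketch: mix each example with a random partner.

    lam ~ Beta(alpha, alpha); labels are mixed the same way, producing
    soft labels rather than hard one-hot targets.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(x))           # partner example for each row
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y + (1 - lam) * y[perm]    # soft labels
    return x_mix, y_mix, lam

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))
y = np.eye(2)[[0, 1, 0, 1]]                  # one-hot labels for 2 classes
x_mix, y_mix, lam = mixup_batch(x, y, rng=rng)
print(x_mix.shape, y_mix.sum(axis=1))        # soft labels still sum to 1
```

Because labels are mixed with the same weight as the inputs, each soft label remains a valid probability distribution, which is what connects Mixup to the calibration results discussed above.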

ALICE: Active Learning with Contrastive Natural Language Explanations

no code implementations EMNLP 2020 Weixin Liang, James Zou, Zhou Yu

We propose Active Learning with Contrastive Explanations (ALICE), an expert-in-the-loop training framework that utilizes contrastive natural language explanations to improve data efficiency in learning.

Active Learning Classification +1

Competing AI: How does competition feedback affect machine learning?

no code implementations15 Sep 2020 Antonio Ginart, Eva Zhang, Yongchan Kwon, James Zou

A service that is more often queried by users, perhaps because it more accurately anticipates user preferences, is also more likely to obtain additional user data (e.g., in the form of a Yelp review).

BIG-bench Machine Learning

Improving Generalization in Meta-learning via Task Augmentation

1 code implementation26 Jul 2020 Huaxiu Yao, Long-Kai Huang, Linjun Zhang, Ying WEI, Li Tian, James Zou, Junzhou Huang, Zhenhui Li

Moreover, both MetaMix and Channel Shuffle outperform state-of-the-art results by a large margin across many datasets and are compatible with existing meta-learning algorithms.

Meta-Learning

Efficient computation and analysis of distributional Shapley values

no code implementations2 Jul 2020 Yongchan Kwon, Manuel A. Rivas, James Zou

Distributional data Shapley value (DShapley) has recently been proposed as a principled framework to quantify the contribution of each individual datum in machine learning.

Binary Classification Density Estimation

Improving Adversarial Robustness via Unlabeled Out-of-Domain Data

no code implementations15 Jun 2020 Zhun Deng, Linjun Zhang, Amirata Ghorbani, James Zou

In this work, we investigate how adversarial robustness can be enhanced by leveraging out-of-domain unlabeled data.

Adversarial Robustness Data Augmentation +2

Improving Training on Noisy Structured Labels

no code implementations8 Mar 2020 Abubakar Abid, James Zou

Systematic experiments on image segmentation and text tagging demonstrate the strong performance of ECN in improving training on noisy structured labels.

Image Segmentation Segmentation +1

A Distributional Framework for Data Valuation

no code implementations ICML 2020 Amirata Ghorbani, Michael P. Kim, James Zou

Shapley value is a classic notion from game theory, historically used to quantify the contributions of individuals within groups, and more recently applied to assign values to data points when training machine learning models.

Data Valuation

Approximate Data Deletion from Machine Learning Models

no code implementations24 Feb 2020 Zachary Izzo, Mary Anne Smart, Kamalika Chaudhuri, James Zou

Deleting data from a trained machine learning (ML) model is a critical task in many applications.

BIG-bench Machine Learning

Neuron Shapley: Discovering the Responsible Neurons

1 code implementation NeurIPS 2020 Amirata Ghorbani, James Zou

We develop Neuron Shapley as a new framework to quantify the contribution of individual neurons to the prediction and performance of a deep network.

Who's responsible? Jointly quantifying the contribution of the learning algorithm and training data

no code implementations9 Oct 2019 Gal Yona, Amirata Ghorbani, James Zou

We propose Extended Shapley as a principled framework for this problem, and experiment empirically with how it can be used to address questions of ML accountability.

Learning transport cost from subset correspondence

no code implementations ICLR 2020 Ruishan Liu, Akshay Balsubramani, James Zou

Optimal transport (OT) is a principled approach to align datasets, but a key challenge in applying OT is that we need to specify a transport cost function that accurately captures how the two datasets are related.

Mixed Dimension Embeddings with Application to Memory-Efficient Recommendation Systems

6 code implementations25 Sep 2019 Antonio Ginart, Maxim Naumov, Dheevatsa Mudigere, Jiyan Yang, James Zou

Embedding representations power machine intelligence in many applications, including recommendation systems, but they are space intensive -- potentially occupying hundreds of gigabytes in large-scale settings.

Click-Through Rate Prediction Collaborative Filtering +1

LitGen: Genetic Literature Recommendation Guided by Human Explanations

1 code implementation24 Sep 2019 Allen Nie, Arturo L. Pineda, Matt W. Wright, Hannah Wand, Bryan Wulf, Helio A. Costa, Ronak Y. Patel, Carlos D. Bustamante, James Zou

In collaboration with the Clinical Genomic Resource (ClinGen)---the flagship NIH program for clinical curation---we propose the first machine learning system, LitGen, that can retrieve papers for a particular variant and filter them by specific evidence types used by curators to assess for pathogenicity.

Making AI Forget You: Data Deletion in Machine Learning

4 code implementations NeurIPS 2019 Antonio Ginart, Melody Y. Guan, Gregory Valiant, James Zou

Intense recent discussions have focused on how to provide individuals with control over when their data can and cannot be used --- the EU's Right To Be Forgotten regulation is an example of this effort.

BIG-bench Machine Learning Clustering

Gradio: Hassle-Free Sharing and Testing of ML Models in the Wild

1 code implementation6 Jun 2019 Abubakar Abid, Ali Abdalla, Ali Abid, Dawood Khan, Abdulrahman Alfozan, James Zou

Their feedback identified that Gradio should support a variety of interfaces and frameworks, allow for easy sharing of the interface, allow for input manipulation and interactive inference by the domain expert, as well as allow embedding the interface in iPython notebooks.

BIG-bench Machine Learning

Discovering Conditionally Salient Features with Statistical Guarantees

no code implementations29 May 2019 Jaime Roquero Gimenez, James Zou

Most of the work in this domain has focused on identifying globally relevant features, which are features that are related to the outcome using evidence across the entire dataset.

feature selection

A Knowledge Graph-based Approach for Exploring the U.S. Opioid Epidemic

no code implementations27 May 2019 Maulik R. Kamdar, Tymor Hamamsy, Shea Shelton, Ayin Vala, Tome Eftimov, James Zou, Suzanne Tamang

Statistical learning methods that use data from multiple clinical centers across the US to detect opioid over-prescribing trends and predict possible opioid misuse are required.

Data Shapley: Equitable Valuation of Data for Machine Learning

5 code implementations5 Apr 2019 Amirata Ghorbani, James Zou

As data becomes the fuel driving technological and economic growth, a fundamental challenge is how to quantify the value of data in algorithmic predictions and decisions.

BIG-bench Machine Learning Data Valuation +1
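The Data Shapley idea above values each training point by its average marginal contribution to model performance over orderings of the dataset. A minimal Monte Carlo permutation estimator in that spirit (the 1-nearest-neighbor scorer, toy data, and permutation count are illustrative assumptions, not the paper's exact algorithm, which also uses truncation for efficiency):

```python
import numpy as np

def monte_carlo_data_shapley(X, y, X_val, y_val, fit_score, n_perms=100, rng=None):
    """Estimate each point's Shapley value as its average marginal
    contribution to validation performance over random orderings."""
    rng = rng or np.random.default_rng()
    n = len(X)
    values = np.zeros(n)
    for _ in range(n_perms):
        perm = rng.permutation(n)
        prev_score = 0.0                      # baseline: empty training set
        for k, i in enumerate(perm, start=1):
            idx = perm[:k]
            score = fit_score(X[idx], y[idx], X_val, y_val)
            values[i] += score - prev_score   # marginal contribution of point i
            prev_score = score
    return values / n_perms

def knn_score(X_tr, y_tr, X_val, y_val):
    """1-NN validation accuracy as a cheap performance metric."""
    d = ((X_val[:, None, :] - X_tr[None, :, :]) ** 2).sum(-1)
    return float((y_tr[d.argmin(axis=1)] == y_val).mean())

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 2))
y = (X[:, 0] > 0).astype(int)
y[0] = 1 - y[0]                               # deliberately mislabel one point
Xv = rng.normal(size=(40, 2))
yv = (Xv[:, 0] > 0).astype(int)
vals = monte_carlo_data_shapley(X, y, Xv, yv, knn_score, n_perms=100, rng=rng)
print(vals.shape)
```

One useful sanity check: the marginal contributions telescope within each permutation, so the estimated values sum exactly to the full-dataset score (the Shapley efficiency property).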

Analyzing Polarization in Social Media: Method and Application to Tweets on 21 Mass Shootings

1 code implementation NAACL 2019 Dorottya Demszky, Nikhil Garg, Rob Voigt, James Zou, Matthew Gentzkow, Jesse Shapiro, Dan Jurafsky

We provide an NLP framework to uncover four linguistic dimensions of political polarization in social media: topic choice, framing, affect and illocutionary force.

Clustering

Contrastive Variational Autoencoder Enhances Salient Features

1 code implementation12 Feb 2019 Abubakar Abid, James Zou

The cVAE explicitly models latent features that are shared between the datasets, as well as those that are enriched in one dataset relative to the other, which allows the algorithm to isolate and enhance the salient latent features.

Contrastive Learning

Towards Automatic Concept-based Explanations

2 code implementations NeurIPS 2019 Amirata Ghorbani, James Wexler, James Zou, Been Kim

Interpretability has become an important topic of research as more machine learning (ML) models are deployed and widely used to make important decisions.

Feature Importance

Concrete Autoencoders for Differentiable Feature Selection and Reconstruction

2 code implementations27 Jan 2019 Abubakar Abid, Muhammad Fatih Balin, James Zou

We introduce the concrete autoencoder, an end-to-end differentiable method for global feature selection, which efficiently identifies a subset of the most informative features and simultaneously learns a neural network to reconstruct the input data from the selected features.

feature selection General Classification +1

Large-scale Generative Modeling to Improve Automated Veterinary Disease Coding

no code implementations29 Nov 2018 Yuhui Zhang, Allen Nie, James Zou

We compare the performance of our model with several baselines in a challenging cross-hospital setting with substantial domain shift.

Minimizing Close-k Aggregate Loss Improves Classification

1 code implementation1 Nov 2018 Bryan He, James Zou

In classification, the de facto method for aggregating individual losses is the average loss.

Classification General Classification

Contrastive Multivariate Singular Spectrum Analysis

no code implementations31 Oct 2018 Abdi-Hakin Dirie, Abubakar Abid, James Zou

We introduce Contrastive Multivariate Singular Spectrum Analysis, a novel unsupervised method for dimensionality reduction and signal decomposition of time series data.

Clustering Dimensionality Reduction +2

Improving the Stability of the Knockoff Procedure: Multiple Simultaneous Knockoffs and Entropy Maximization

no code implementations26 Oct 2018 Jaime Roquero Gimenez, James Zou

The Model-X knockoff procedure has recently emerged as a powerful approach for feature selection with statistical guarantees.

feature selection

Autowarp: Learning a Warping Distance from Unlabeled Time Series Using Sequence Autoencoders

no code implementations NeurIPS 2018 Abubakar Abid, James Zou

We define a flexible and differentiable family of warping metrics, which encompasses common metrics such as DTW, Euclidean, and edit distance.

Astronomy Dynamic Time Warping +2

Knockoffs for the mass: new feature importance statistics with false discovery guarantees

no code implementations17 Jul 2018 Jaime Roquero Gimenez, Amirata Ghorbani, James Zou

This is often impossible to do from purely observational data, and a natural relaxation is to identify features that are correlated with the outcome even conditioned on all other observed features.

Feature Importance valid

DeepTag: inferring all-cause diagnoses from clinical notes in under-resourced medical domain

1 code implementation28 Jun 2018 Allen Nie, Ashley Zehnder, Rodney L. Page, Arturo L. Pineda, Manuel A. Rivas, Carlos D. Bustamante, James Zou

However, clinicians lack the time and resource to annotate patient records with standard medical diagnostic codes and most veterinary visits are captured in free text notes.

Multiaccuracy: Black-Box Post-Processing for Fairness in Classification

1 code implementation31 May 2018 Michael P. Kim, Amirata Ghorbani, James Zou

Prediction systems are successfully deployed in applications ranging from disease diagnosis, to predicting credit worthiness, to image recognition.

Classification Fairness +2

Feedback GAN (FBGAN) for DNA: a Novel Feedback-Loop Architecture for Optimizing Protein Functions

no code implementations5 Apr 2018 Anvita Gupta, James Zou

We propose a novel feedback-loop architecture, called Feedback GAN (FBGAN), to optimize the synthetic gene sequences for desired properties using an external function analyzer.

Stochastic EM for Shuffled Linear Regression

no code implementations2 Apr 2018 Abubakar Abid, James Zou

We consider the problem of inference in a linear regression model in which the relative ordering of the input features and output labels is not known.

regression

CoVeR: Learning Covariate-Specific Vector Representations with Tensor Decompositions

1 code implementation ICML 2018 Kevin Tian, Teng Zhang, James Zou

However, in addition to the text data itself, we often have additional covariates associated with individual corpus documents---e.g., the demographic of the author, time and venue of publication---and we would like the embedding to naturally capture this information.

Natural Questions Tensor Decomposition

Learning Covariate-Specific Embeddings with Tensor Decompositions

no code implementations ICLR 2018 Kevin Tian, Teng Zhang, James Zou

In addition to the text data itself, we often have additional covariates associated with individual documents in the corpus---e.g., the demographic of the author, time and venue of publication, etc.---and we would like the embedding to naturally capture the information of the covariates.

Natural Questions Tensor Decomposition +1

From Information Bottleneck To Activation Norm Penalty

no code implementations ICLR 2018 Allen Nie, Mihir Mongia, James Zou

Recently, a regularization method has been proposed to optimize the variational lower bound of the Information Bottleneck Lagrangian.

General Classification Image Classification +1

INTERPRETATION OF NEURAL NETWORK IS FRAGILE

no code implementations ICLR 2018 Amirata Ghorbani, Abubakar Abid, James Zou

In this paper, we show that interpretation of deep learning predictions is extremely fragile in the following sense: two perceptively indistinguishable inputs with the same predicted label can be assigned very different interpretations.

BIG-bench Machine Learning Feature Importance

Word Embeddings Quantify 100 Years of Gender and Ethnic Stereotypes

1 code implementation22 Nov 2017 Nikhil Garg, Londa Schiebinger, Dan Jurafsky, James Zou

Word embeddings use vectors to represent words such that the geometry between vectors captures semantic relationship between the words.

Word Embeddings

NeuralFDR: Learning Discovery Thresholds from Hypothesis Features

1 code implementation NeurIPS 2017 Fei Xia, Martin J. Zhang, James Zou, David Tse

For example, in genetic association studies, each hypothesis tests the correlation between a variant and the trait.

Interpretation of Neural Networks is Fragile

2 code implementations29 Oct 2017 Amirata Ghorbani, Abubakar Abid, James Zou

In this paper, we show that interpretation of deep learning predictions is extremely fragile in the following sense: two perceptively indistinguishable inputs with the same predicted label can be assigned very different interpretations.

BIG-bench Machine Learning Feature Importance

The Effects of Memory Replay in Reinforcement Learning

1 code implementation18 Oct 2017 Ruishan Liu, James Zou

We show that even in this very simple setting, the amount of memory kept can substantially affect the agent's performance.

Q-Learning reinforcement-learning +1

Contrastive Principal Component Analysis

1 code implementation20 Sep 2017 Abubakar Abid, Martin J. Zhang, Vivek K. Bagaria, James Zou

We present a new technique called contrastive principal component analysis (cPCA) that is designed to discover low-dimensional structure that is unique to a dataset, or enriched in one dataset relative to other data.

Denoising feature selection +1
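The cPCA entry above looks for directions with high variance in a target dataset but low variance in a background dataset, which reduces to an eigendecomposition of a contrastive covariance matrix. A minimal sketch under that reading (the contrast strength `alpha` and the toy data are illustrative assumptions; the paper also describes how to choose `alpha` automatically):

```python
import numpy as np

def cpca_directions(target, background, alpha=1.0, n_components=2):
    """Contrastive PCA sketch: top eigenvectors of C_target - alpha * C_background,
    i.e., directions with high target variance but low background variance."""
    # Center each dataset and form empirical covariance matrices.
    t = target - target.mean(axis=0)
    b = background - background.mean(axis=0)
    c_target = t.T @ t / (len(t) - 1)
    c_background = b.T @ b / (len(b) - 1)
    # Eigendecomposition of the (symmetric) contrastive covariance.
    vals, vecs = np.linalg.eigh(c_target - alpha * c_background)
    order = np.argsort(vals)[::-1]            # largest contrastive variance first
    return vecs[:, order[:n_components]]      # shape (n_features, n_components)

rng = np.random.default_rng(0)
# Background: noise only. Target: same noise plus extra structure in feature 0.
background = rng.normal(size=(500, 5))
target = rng.normal(size=(500, 5))
target[:, 0] += rng.choice([-3.0, 3.0], size=500)   # salient bimodal signal
v = cpca_directions(target, background, alpha=2.0, n_components=1)
print(v.shape)   # top contrastive direction concentrates on feature 0
```

Setting `alpha = 0` recovers ordinary PCA on the target; increasing `alpha` progressively penalizes directions that merely reflect variation shared with the background.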

Why Adaptively Collected Data Have Negative Bias and How to Correct for It

no code implementations7 Aug 2017 Xinkun Nie, Xiaoying Tian, Jonathan Taylor, James Zou

In this paper, we prove that when the data collection procedure satisfies natural conditions, then sample means of the data have systematic negative biases.

Learning Latent Space Models with Angular Constraints

no code implementations ICML 2017 Pengtao Xie, Yuntian Deng, Yi Zhou, Abhimanu Kumar, Yao-Liang Yu, James Zou, Eric P. Xing

The large model capacity of latent space models (LSMs) enables them to achieve great performance on various applications, but meanwhile renders LSMs to be prone to overfitting.

Estimating the unseen from multiple populations

2 code implementations ICML 2017 Aditi Raghunathan, Greg Valiant, James Zou

We generalize this extrapolation and related unseen estimation problems to the multiple population setting, where population $j$ has an unknown distribution $D_j$ from which we observe $n_j$ samples.

Beyond Bilingual: Multi-sense Word Embeddings using Multilingual Context

no code implementations WS 2017 Shyam Upadhyay, Kai-Wei Chang, Matt Taddy, Adam Kalai, James Zou

We present a multi-view Bayesian non-parametric algorithm which improves multi-sense word embeddings by (a) using multilingual (i.e., more than two languages) corpora to significantly improve sense embeddings beyond what one achieves with bilingual information, and (b) uses a principled approach to learn a variable number of senses per word, in a data-driven manner.

Representation Learning Word Embeddings

Linear Regression with Shuffled Labels

no code implementations3 May 2017 Abubakar Abid, Ada Poon, James Zou

We study the regimes in which each estimator excels, and generalize the estimators to the setting where partial ordering information is available in the form of experiments replicated independently.

regression

Quantifying and Reducing Stereotypes in Word Embeddings

no code implementations20 Jun 2016 Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, Adam Kalai

Machine learning algorithms are optimized to model statistical properties of the training data.

Word Embeddings

Clustering with a Reject Option: Interactive Clustering as Bayesian Prior Elicitation

no code implementations19 Jun 2016 Akash Srivastava, James Zou, Ryan P. Adams, Charles Sutton

A good clustering can help a data analyst to explore and understand a data set, but what constitutes a good clustering may depend on domain-specific and application-specific criteria.

Clustering

Quantifying the accuracy of approximate diffusions and Markov chains

no code implementations20 May 2016 Jonathan H. Huggins, James Zou

As an illustration, we apply our framework to derive finite-sample error bounds of approximate unadjusted Langevin dynamics.

Clustering with a Reject Option: Interactive Clustering as Bayesian Prior Elicitation

no code implementations22 Feb 2016 Akash Srivastava, James Zou, Charles Sutton

A good clustering can help a data analyst to explore and understand a data set, but what constitutes a good clustering may depend on domain-specific and application-specific criteria.

Clustering Computational Efficiency

How much does your data exploration overfit? Controlling bias via information usage

no code implementations16 Nov 2015 Daniel Russo, James Zou

But while any data exploration renders standard statistical theory invalid, experience suggests that different types of exploratory analysis can lead to disparate levels of bias, and the degree of bias also depends on the particulars of the data set.

Clustering

Rich Component Analysis

no code implementations14 Jul 2015 Rong Ge, James Zou

In this paper, we develop the general framework of Rich Component Analysis (RCA) to model settings where the observations from different views are driven by different sets of latent components, and each component can be a complex, high-dimensional distribution.

Intersecting Faces: Non-negative Matrix Factorization With New Guarantees

no code implementations8 Jul 2015 Rong Ge, James Zou

A plethora of algorithms have been developed to tackle NMF, but due to the non-convex nature of the problem, there is little guarantee on how well these methods work.
