no code implementations • 20 Dec 2024 • Crystal Qian, Michael Xieyang Liu, Emily Reif, Grady Simon, Nada Hussein, Nathan Clement, James Wexler, Carrie J. Cai, Michael Terry, Minsuk Kahng
This paper explores the evolution of LLM adoption among practitioners at a large technology company, evaluating the impact of LLMs on data curation tasks through participants' perceptions, integration strategies, and reported usage scenarios.
no code implementations • 7 Mar 2024 • Savvas Petridis, Ben Wedin, Ann Yuan, James Wexler, Nithum Thain
We also show that we can improve overall performance by learning unique prompts for different semantic regions of the training data and using a mixture-of-experts (MoE) architecture to route inputs at inference time.
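The snippet above describes learning distinct prompts for different semantic regions of the training data and routing inputs among them at inference time. A minimal sketch of one way such routing could work, using nearest-centroid assignment in an embedding space; the prompts, centroids, and toy embeddings are all hypothetical stand-ins, not the paper's actual method:

```python
import numpy as np

# Hypothetical region-specific prompts, one per semantic cluster of training data.
REGION_PROMPTS = {
    0: "Summarize the following news article:",
    1: "Summarize the following scientific abstract:",
}

# Toy cluster centroids in a 3-d embedding space (stand-ins for learned ones).
CENTROIDS = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
])

def route_prompt(embedding: np.ndarray) -> str:
    """Pick the prompt whose region centroid is nearest to the input embedding."""
    dists = np.linalg.norm(CENTROIDS - embedding, axis=1)
    return REGION_PROMPTS[int(np.argmin(dists))]

# An input embedded near centroid 0 is routed to the first prompt.
print(route_prompt(np.array([0.9, 0.1, 0.0])))
```

A learned gating network could replace the nearest-centroid rule, but the routing interface — input representation in, region-specific prompt out — stays the same.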
1 code implementation • 21 Feb 2024 • Emily Reif, Crystal Qian, James Wexler, Minsuk Kahng
Making sense of unstructured text datasets is perennially difficult, yet increasingly relevant with the rise of Large Language Models.
2 code implementations • 16 Feb 2024 • Minsuk Kahng, Ian Tenney, Mahima Pushkarna, Michael Xieyang Liu, James Wexler, Emily Reif, Krystal Kallarackal, Minsuk Chang, Michael Terry, Lucas Dixon
Automatic side-by-side evaluation has emerged as a promising approach to evaluating the quality of responses from large language models (LLMs).
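Side-by-side evaluation aggregates per-example preference judgments between two models into overall statistics. A minimal sketch, assuming toy judgment data and a simple win-rate aggregation (not the paper's tool or rater):

```python
from collections import Counter

# Toy per-example judgments from an automatic rater comparing models A and B.
# Each entry is "A", "B", or "tie" -- purely illustrative data.
judgments = ["A", "A", "tie", "B", "A", "tie", "B", "A"]

def win_rates(judgments):
    """Aggregate side-by-side judgments into per-model win and tie rates."""
    counts = Counter(judgments)
    n = len(judgments)
    return {k: counts[k] / n for k in ("A", "B", "tie")}

print(win_rates(judgments))  # {'A': 0.5, 'B': 0.25, 'tie': 0.25}
```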
no code implementations • 24 Oct 2023 • Savvas Petridis, Ben Wedin, James Wexler, Aaron Donsbach, Mahima Pushkarna, Nitesh Goyal, Carrie J. Cai, Michael Terry
Inspired by these findings, we developed ConstitutionMaker, an interactive tool for converting user feedback into principles, to steer LLM-based chatbots.
no code implementations • 13 Dec 2021 • Leon Sixt, Evan Zheran Liu, Marie Pellat, James Wexler, Milad Hashemi, Been Kim, Martin Maas
Machine Learning has been successfully applied in systems applications such as memory prefetching and caching, where learned models have been shown to outperform heuristics.
no code implementations • 16 Jun 2021 • Jessica Schrouff, Sebastien Baur, Shaobo Hou, Diana Mincu, Eric Loreaux, Ralph Blanes, James Wexler, Alan Karthikesalingam, Been Kim
While many methods focus on either local or global explanations, few frameworks can provide both in a consistent manner.
1 code implementation • EMNLP 2020 • Ian Tenney, James Wexler, Jasmijn Bastings, Tolga Bolukbasi, Andy Coenen, Sebastian Gehrmann, Ellen Jiang, Mahima Pushkarna, Carey Radebaugh, Emily Reif, Ann Yuan
We present the Language Interpretability Tool (LIT), an open-source platform for visualization and understanding of NLP models.
no code implementations • 14 Jul 2019 • Cheng-Zhi Anna Huang, Curtis Hawthorne, Adam Roberts, Monica Dinculescu, James Wexler, Leon Hong, Jacob Howcroft
To make music composition more approachable, we designed the first AI-powered Google Doodle, the Bach Doodle, where users can create their own melody and have it harmonized in the style of Bach by Coconet (Huang et al., 2017), a machine learning model.
1 code implementation • 9 Jul 2019 • James Wexler, Mahima Pushkarna, Tolga Bolukbasi, Martin Wattenberg, Fernanda Viegas, Jimbo Wilson
A key challenge in developing and deploying Machine Learning (ML) systems is understanding their performance across a wide range of inputs.
2 code implementations • NeurIPS 2019 • Amirata Ghorbani, James Wexler, James Zou, Been Kim
Interpretability has become an important topic of research as more machine learning (ML) models are deployed and widely used to make important decisions.
1 code implementation • 13 Oct 2018 • Marzyeh Ghassemi, Mahima Pushkarna, James Wexler, Jesse Johnson, Paul Varghese
Deciding which clinical tasks to prepare for is a multi-factored process, and it is especially challenging in intensive care environments, where resources must be balanced against patient needs.
no code implementations • 24 Jan 2018 • Alvin Rajkomar, Eyal Oren, Kai Chen, Andrew M. Dai, Nissan Hajaj, Peter J. Liu, Xiaobing Liu, Mimi Sun, Patrik Sundberg, Hector Yee, Kun Zhang, Gavin E. Duggan, Gerardo Flores, Michaela Hardt, Jamie Irvine, Quoc Le, Kurt Litsch, Jake Marcus, Alexander Mossin, Justin Tansuwan, De Wang, James Wexler, Jimbo Wilson, Dana Ludwig, Samuel L. Volchenboum, Katherine Chou, Michael Pearson, Srinivasan Madabushi, Nigam H. Shah, Atul J. Butte, Michael Howell, Claire Cui, Greg Corrado, Jeff Dean
Predictive modeling with electronic health record (EHR) data is anticipated to drive personalized medicine and improve healthcare quality.
11 code implementations • ICML 2018 • Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, Rory Sayres
The interpretation of deep learning models is a challenge due to their size, complexity, and often opaque internal state.