Search Results for author: Mina Lee

Found 12 papers, 7 papers with code

Towards Explainable AI Writing Assistants for Non-native English Speakers

no code implementations · 5 Apr 2023 · Yewon Kim, Mina Lee, Donghwi Kim, Sung-Ju Lee

We highlight the challenges faced by non-native speakers when using AI writing assistants to paraphrase text.

Evaluating Human-Language Model Interaction

1 code implementation · 19 Dec 2022 · Mina Lee, Megha Srivastava, Amelia Hardy, John Thickstun, Esin Durmus, Ashwin Paranjape, Ines Gerard-Ursin, Xiang Lisa Li, Faisal Ladhak, Frieda Rong, Rose E. Wang, Minae Kwon, Joon Sung Park, Hancheng Cao, Tony Lee, Rishi Bommasani, Michael Bernstein, Percy Liang

To evaluate human-LM interaction, we develop a new framework, Human-AI Language-based Interaction Evaluation (HALIE), that defines the components of interactive systems and dimensions to consider when designing evaluation metrics.

Language Modelling · Question Answering

TempLM: Distilling Language Models into Template-Based Generators

1 code implementation · 23 May 2022 · Tianyi Zhang, Mina Lee, Lisa Li, Ende Shen, Tatsunori B. Hashimoto

While pretrained language models (PLMs) have greatly improved text generation, they have also been known to produce unfaithful or inappropriate content.

Text Generation

CoAuthor: Designing a Human-AI Collaborative Writing Dataset for Exploring Language Model Capabilities

no code implementations · 18 Jan 2022 · Mina Lee, Percy Liang, Qian Yang

Large language models (LMs) offer unprecedented language generation capabilities and exciting opportunities for interaction design.

Language Modelling · Text Generation

On the Opportunities and Risks of Foundation Models

2 code implementations · 16 Aug 2021 · Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, aditi raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, Percy Liang

AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks.

Transfer Learning

Swords: A Benchmark for Lexical Substitution with Improved Data Coverage and Quality

1 code implementation · NAACL 2021 · Mina Lee, Chris Donahue, Robin Jia, Alexander Iyabor, Percy Liang

We release a new benchmark for lexical substitution, the task of finding appropriate substitutes for a target word in a context.

Enabling Language Models to Fill in the Blanks

3 code implementations · ACL 2020 · Chris Donahue, Mina Lee, Percy Liang

We show that this approach, which we call infilling by language modeling, can enable LMs to infill entire sentences effectively on three different domains: short stories, scientific abstracts, and lyrics.

Language Modelling · Text Infilling
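The infilling-by-language-modeling approach lets an ordinary left-to-right LM fill in blanks by moving the masked spans to the end of the training sequence. A minimal sketch of that data format, where the special-token strings `[blank]`, `[sep]`, `[answer]` and the helper name `make_ilm_example` are illustrative stand-ins rather than the paper's exact vocabulary:

```python
def make_ilm_example(text, spans, blank="[blank]", sep="[sep]", answer="[answer]"):
    """Turn (text, character spans to mask) into a single LM training string:
    the context with blanks, a separator, then the answers in left-to-right order."""
    pieces, answers, prev = [], [], 0
    for start, end in sorted(spans):
        pieces.append(text[prev:start])  # unmasked context
        pieces.append(blank)             # placeholder for the removed span
        answers.append(text[start:end])  # the span the LM must reproduce
        prev = end
    pieces.append(text[prev:])
    suffix = " ".join(a + " " + answer for a in answers)
    return "".join(pieces) + " " + sep + " " + suffix

# Example: mask "leftover pasta" and "lunch" in one sentence.
ex = make_ilm_example("She ate leftover pasta for lunch.", [(8, 22), (27, 32)])
# → "She ate [blank] for [blank]. [sep] leftover pasta [answer] lunch [answer]"
```

A standard LM trained on such strings learns to generate the answers after the separator, so at inference time blanks anywhere in the context can be filled with plain left-to-right decoding.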

Deep Metric Learning Network using Proxies for Chromosome Classification in Karyotyping Test

no code implementations · MIDL 2019 · Hwejin Jung, Bogyu Park, Seungwoo Hyun, Hanwoong Kim, Jinah Lee, Junseok Seo, Sunyoung Koo, Mina Lee

To assist cytogeneticists in karyotyping, we introduce Proxy-ResNeXt-CBAM which is a metric learning based network using proxies with a convolutional block attention module (CBAM) designed for chromosome classification.

Image Retrieval · Metric Learning (+1)
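Proxy-based metric learning replaces expensive pairwise or triplet comparisons with one learned proxy vector per class. A minimal sketch of a Proxy-NCA-style loss for a single embedding, assuming squared Euclidean distances and a plain label-to-proxy dictionary; this illustrates the general technique, not the authors' Proxy-ResNeXt-CBAM implementation:

```python
import math

def proxy_nca_loss(embedding, proxies, label):
    """Proxy-NCA-style loss for one embedding: pull it toward the proxy of
    its own class and push it away from the proxies of all other classes.
    `proxies` maps class label -> proxy vector (an illustrative setup)."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    pos = math.exp(-sqdist(embedding, proxies[label]))          # own-class proxy
    neg = sum(math.exp(-sqdist(embedding, p))                   # all other proxies
              for c, p in proxies.items() if c != label)
    return -math.log(pos / neg)

# An embedding sitting on its own proxy incurs a much lower loss
# than one assigned to a distant class.
proxies = {"chr1": [0.0, 0.0], "chr2": [10.0, 0.0]}
good = proxy_nca_loss([0.0, 0.0], proxies, "chr1")
bad = proxy_nca_loss([0.0, 0.0], proxies, "chr2")
```

Because each example is compared only against a handful of proxies rather than all other examples, training scales with the number of classes instead of the dataset size.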

Learning Autocomplete Systems as a Communication Game

1 code implementation · 16 Nov 2019 · Mina Lee, Tatsunori B. Hashimoto, Percy Liang

We study textual autocomplete, the task of predicting a full sentence from a partial sentence, as a human-machine communication game.

Sentence

Forecasting e-scooter substitution of direct and access trips by mode and distance

no code implementations · 21 Aug 2019 · Mina Lee, Joseph Y. J. Chow, Gyugeun Yoon, Brian Yueshuai He

An e-scooter trip model is estimated from four U.S. cities: Portland, Austin, Chicago, and New York City.

SPoC: Search-based Pseudocode to Code

1 code implementation · NeurIPS 2019 · Sumith Kulal, Panupong Pasupat, Kartik Chandra, Mina Lee, Oded Padon, Alex Aiken, Percy Liang

Given test cases as a mechanism to validate programs, we search over the space of possible translations of the pseudocode to find a program that passes the validation.

Program Synthesis · Translation
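The search described above can be sketched as a loop over combinations of per-line candidate translations, returning the first full program that passes every test case. The function and parameter names, and the `run_program` execution harness, are assumptions for illustration; the paper's actual system ranks candidates by model score rather than enumerating them naively:

```python
from itertools import product

def search_translations(candidates_per_line, test_cases, run_program):
    """Try combinations of per-line candidate translations and return the
    first assembled program that passes all (input, expected_output) pairs.
    `run_program(program, inp) -> output` is an assumed execution harness."""
    for combo in product(*candidates_per_line):
        program = "\n".join(combo)
        try:
            if all(run_program(program, inp) == out for inp, out in test_cases):
                return program
        except Exception:
            continue  # candidate failed to compile or crashed; keep searching
    return None  # no combination passed validation
```

A toy usage with Python as the target language: give two candidate translations for one pseudocode line, and let a single test case (`f(3) == 6`) pick the correct one.

```python
def run_program(program, inp):
    namespace = {}
    exec(program, namespace)      # "compile" the candidate program
    return namespace["f"](inp)

candidates = [["def f(x):"], ["    return x + 2", "    return x * 2"]]
best = search_translations(candidates, [(3, 6)], run_program)
# → the program whose second line is "    return x * 2"
```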
