Search Results for author: Xiang Lisa Li

Found 15 papers, 11 papers with code

Few-Shot Recalibration of Language Models

no code implementations 27 Mar 2024 Xiang Lisa Li, Urvashi Khandelwal, Kelvin Guu

Recent work has uncovered promising ways to extract well-calibrated confidence estimates from language models (LMs), where the model's confidence score reflects how likely it is to be correct (a sketch of what well-calibrated means follows this entry).

Math
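The notion of "well-calibrated" referenced above can be made concrete with a standard expected-calibration-error check. This is a generic sketch, not the paper's few-shot recalibration method; the function name, binning scheme, and example inputs are illustrative assumptions.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Generic calibration check: bucket predictions by confidence and compare
    each bucket's average confidence to its empirical accuracy. A well-calibrated
    model has a small gap in every bucket."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(confidences[in_bin].mean() - correct[in_bin].mean())
            ece += in_bin.mean() * gap  # weight the gap by the bucket's share of examples
    return ece

# Example: confidence should track accuracy; a large ECE signals miscalibration.
print(expected_calibration_error([0.9, 0.8, 0.6, 0.55], [1, 1, 0, 1]))
```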

On the Learnability of Watermarks for Language Models

1 code implementation 7 Dec 2023 Chenchen Gu, Xiang Lisa Li, Percy Liang, Tatsunori Hashimoto

Watermarking of language model outputs enables statistical detection of model-generated text, which has many applications in the responsible deployment of language models.

Language Modelling

Benchmarking and Improving Generator-Validator Consistency of Language Models

no code implementations 3 Oct 2023 Xiang Lisa Li, Vaishnavi Shrivastava, Siyan Li, Tatsunori Hashimoto, Percy Liang

To improve the consistency of LMs, we propose to finetune on the filtered generator and validator responses that are GV-consistent, and call this approach consistency fine-tuning (a sketch of the filtering step follows this entry).

Benchmarking Instruction Following +1
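The consistency fine-tuning recipe summarized above reduces to a filtering step over generator/validator pairs. Below is a minimal sketch of that filter under stated assumptions: `generator_answer` and `validator_verdict` are hypothetical stand-ins for the two LM calls, and the field names are illustrative rather than taken from the paper's released data format.

```python
def filter_gv_consistent(queries, generator_answer, validator_verdict):
    """Keep only the (query, answer, verdict) triples where the validator
    agrees with what the generator produced."""
    consistent = []
    for query in queries:
        answer = generator_answer(query)             # generator-side response
        verdict = validator_verdict(query, answer)   # validator-side yes/no judgment
        if verdict.strip().lower() == "yes":         # GV-consistent pair
            consistent.append({"query": query, "answer": answer, "verdict": verdict})
    return consistent
```

The retained examples would then serve as fine-tuning data for both the generator-style and validator-style prompts.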

Learning to Compress Prompts with Gist Tokens

1 code implementation NeurIPS 2023 Jesse Mu, Xiang Lisa Li, Noah Goodman

Prompting is the primary way to utilize the multitask capabilities of language models (LMs), but prompts occupy valuable space in the input context window, and repeatedly encoding the same prompt is computationally inefficient.

Demonstrate-Search-Predict: Composing retrieval and language models for knowledge-intensive NLP

2 code implementations 28 Dec 2022 Omar Khattab, Keshav Santhanam, Xiang Lisa Li, David Hall, Percy Liang, Christopher Potts, Matei Zaharia

Retrieval-augmented in-context learning has emerged as a powerful approach for addressing knowledge-intensive tasks using frozen language models (LMs) and retrieval models (RMs).

In-Context Learning Language Modelling +2

Evaluating Human-Language Model Interaction

1 code implementation 19 Dec 2022 Mina Lee, Megha Srivastava, Amelia Hardy, John Thickstun, Esin Durmus, Ashwin Paranjape, Ines Gerard-Ursin, Xiang Lisa Li, Faisal Ladhak, Frieda Rong, Rose E. Wang, Minae Kwon, Joon Sung Park, Hancheng Cao, Tony Lee, Rishi Bommasani, Michael Bernstein, Percy Liang

To evaluate human-LM interaction, we develop a new framework, Human-AI Language-based Interaction Evaluation (HALIE), that defines the components of interactive systems and dimensions to consider when designing evaluation metrics.

Language Modelling Question Answering

Contrastive Decoding: Open-ended Text Generation as Optimization

2 code implementations 27 Oct 2022 Xiang Lisa Li, Ari Holtzman, Daniel Fried, Percy Liang, Jason Eisner, Tatsunori Hashimoto, Luke Zettlemoyer, Mike Lewis

We propose contrastive decoding (CD), a reliable decoding approach that optimizes a contrastive objective subject to a plausibility constraint (a sketch of one decoding step follows this entry).

Language Modelling Text Generation
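The one-line description above maps to a simple per-step rule: score tokens by the difference between the expert and amateur log-probabilities, restricted to tokens the expert finds plausible. The sketch below assumes the two next-token distributions are already available as arrays; the variable names and the alpha value are illustrative, not taken from the released implementation.

```python
import numpy as np

def contrastive_decoding_step(p_expert, p_amateur, alpha=0.1):
    """Greedy CD step: maximize log p_expert - log p_amateur over the set of
    tokens whose expert probability clears a fraction alpha of the expert's
    maximum (the plausibility constraint)."""
    p_expert = np.asarray(p_expert, dtype=float)
    p_amateur = np.asarray(p_amateur, dtype=float)
    plausible = p_expert >= alpha * p_expert.max()   # plausibility constraint
    scores = np.where(
        plausible,
        np.log(np.clip(p_expert, 1e-12, 1.0)) - np.log(np.clip(p_amateur, 1e-12, 1.0)),
        -np.inf,                                     # implausible tokens are never chosen
    )
    return int(scores.argmax())

# Example: the expert prefers token 2 and the amateur does not, so CD amplifies it.
next_token = contrastive_decoding_step([0.1, 0.2, 0.6, 0.1], [0.5, 0.2, 0.2, 0.1])
```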

Diffusion-LM Improves Controllable Text Generation

1 code implementation 27 May 2022 Xiang Lisa Li, John Thickstun, Ishaan Gulrajani, Percy Liang, Tatsunori B. Hashimoto

Controlling the behavior of language models (LMs) without re-training is a major open problem in natural language generation.

Language Modelling Sentence +1

Ensembles and Cocktails: Robust Finetuning for Natural Language Generation

no code implementations 29 Sep 2021 John Hewitt, Xiang Lisa Li, Sang Michael Xie, Benjamin Newman, Percy Liang

When finetuning a pretrained language model for natural language generation tasks, one is currently faced with a tradeoff.

Language Modelling Text Generation

On the Opportunities and Risks of Foundation Models

2 code implementations 16 Aug 2021 Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, Percy Liang

AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks.

Transfer Learning

Prefix-Tuning: Optimizing Continuous Prompts for Generation

10 code implementations ACL 2021 Xiang Lisa Li, Percy Liang

Fine-tuning is the de facto way to leverage large pretrained language models to perform downstream tasks.

Language Modelling Table-to-Text Generation

Posterior Control of Blackbox Generation

2 code implementations ACL 2020 Xiang Lisa Li, Alexander M. Rush

In this work, we consider augmenting neural generation models with discrete control states learned through a structured latent-variable approach.

Text Generation

Specializing Word Embeddings (for Parsing) by Information Bottleneck

1 code implementation IJCNLP 2019 Xiang Lisa Li, Jason Eisner

Pre-trained word embeddings like ELMo and BERT contain rich syntactic and semantic information, resulting in state-of-the-art performance on various tasks.

Dimensionality Reduction POS +2

A Generative Model for Punctuation in Dependency Trees

no code implementations TACL 2019 Xiang Lisa Li, Dingquan Wang, Jason Eisner

When the tree's yield is rendered as a written sentence, a string rewriting mechanism transduces the underlying marks into "surface" marks, which are part of the observed (surface) string but should not be regarded as part of the tree.

Punctuation Restoration Sentence
