1 code implementation • 11 Feb 2025 • Chenchen Gu, Xiang Lisa Li, Rohith Kuditipudi, Percy Liang, Tatsunori Hashimoto
We detect global cache sharing across users in seven API providers, including OpenAI, resulting in potential privacy leakage about users' prompts.
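A rough sketch of the kind of timing-based cache audit this points to (the `client.complete` interface is a hypothetical placeholder, not any provider's real SDK): if prompt caches are shared across users, a prompt prefix recently sent by user A should return its first token measurably faster when user B sends it.

```python
# Sketch only: hypothetical API client, assumed timing side channel.
import time
import statistics

def time_to_first_token(client, prompt, n_trials=25):
    """Median latency of a 1-token completion for `prompt`."""
    latencies = []
    for _ in range(n_trials):
        start = time.perf_counter()
        client.complete(prompt, max_tokens=1)      # hypothetical API call
        latencies.append(time.perf_counter() - start)
    return statistics.median(latencies)

def audit_cache_sharing(client_a, client_b, prompt):
    client_a.complete(prompt, max_tokens=1)        # user A may warm a shared cache
    warm = time_to_first_token(client_b, prompt)   # user B, same prompt
    control = time_to_first_token(client_b, "x " * (len(prompt) // 2))  # fresh prompt
    return warm, control  # warm << control suggests cross-user cache sharing
```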
2 code implementations • 31 Jan 2025 • Niklas Muennighoff, Zitong Yang, Weijia Shi, Xiang Lisa Li, Li Fei-Fei, Hannaneh Hajishirzi, Luke Zettlemoyer, Percy Liang, Emmanuel Candès, Tatsunori Hashimoto
After supervised finetuning the Qwen2.5-32B-Instruct language model on s1K and equipping it with budget forcing, our model s1-32B exceeds o1-preview on competition math questions by up to 27% (MATH and AIME24).
Ranked #4 on Mathematical Reasoning on AIME24
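Budget forcing, as described for s1, controls test-time compute by capping or extending the model's thinking tokens. A rough sketch follows; the delimiter string, `model.generate` interface, and token counting are illustrative assumptions, not the released implementation.

```python
# Sketch of budget forcing: cap thinking tokens, and if the model stops
# thinking too early, append "Wait" to push it to keep reasoning.
END_THINK = "<|end_of_thinking|>"   # assumed end-of-thinking delimiter

def budget_forced_answer(model, tokenizer, prompt, min_think=1000, max_think=8000):
    thinking = ""
    while True:
        budget_left = max_think - len(tokenizer.encode(thinking))
        if budget_left <= 0:
            break                                    # hard cap: force an answer now
        chunk = model.generate(prompt + thinking,
                               stop=[END_THINK],
                               max_new_tokens=budget_left)
        thinking += chunk
        if len(tokenizer.encode(thinking)) < min_think:
            thinking += "\nWait"                     # suppress early stop, think longer
        else:
            break
    return model.generate(prompt + thinking + END_THINK + "\nFinal answer:")
```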
1 code implementation • 11 Jul 2024 • Xiang Lisa Li, Evan Zheran Liu, Percy Liang, Tatsunori Hashimoto
In this paper, we present three desiderata for a good benchmark for language models: (i) salience (e.g., knowledge about World War II is more salient than a random day in history), (ii) novelty (i.e., the benchmark reveals new trends in model rankings not shown by previous benchmarks), and (iii) difficulty (i.e., the benchmark should be difficult for existing models, leaving headroom for future improvement).
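One illustrative way to operationalize the novelty desideratum (an assumption here, not necessarily the paper's exact metric) is to measure how poorly a candidate benchmark's model ranking correlates with the rankings induced by prior benchmarks.

```python
# Novelty as disagreement with all prior benchmarks' model rankings (illustrative).
from scipy.stats import spearmanr

def novelty(new_scores, prior_benchmarks):
    """new_scores and each prior benchmark: dict mapping model name -> score."""
    models = sorted(new_scores)
    new = [new_scores[m] for m in models]
    best_corr = max(
        spearmanr(new, [prior[m] for m in models])[0]
        for prior in prior_benchmarks
    )
    return 1.0 - best_corr  # high when the new ranking diverges from all prior ones
```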
no code implementations • 27 Mar 2024 • Xiang Lisa Li, Urvashi Khandelwal, Kelvin Guu
Recent work has uncovered promising ways to extract well-calibrated confidence estimates from language models (LMs), where the model's confidence score reflects how likely it is to be correct.
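As a minimal sketch of one common recipe (which may differ from the paper's): score an answer by the probability the model assigns to its own tokens, then check calibration with expected calibration error.

```python
# Sequence-probability confidence plus a standard ECE check (illustrative).
import math

def answer_confidence(token_logprobs):
    """Confidence = probability the model assigns to its sampled answer tokens."""
    return math.exp(sum(token_logprobs))

def expected_calibration_error(confidences, correct, n_bins=10):
    bins = [[] for _ in range(n_bins)]
    for c, y in zip(confidences, correct):
        bins[min(int(c * n_bins), n_bins - 1)].append((c, y))
    ece = 0.0
    for b in bins:
        if b:
            avg_conf = sum(c for c, _ in b) / len(b)
            acc = sum(y for _, y in b) / len(b)
            ece += len(b) / len(confidences) * abs(avg_conf - acc)
    return ece
```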
1 code implementation • 7 Dec 2023 • Chenchen Gu, Xiang Lisa Li, Percy Liang, Tatsunori Hashimoto
Watermarking of language model outputs enables statistical detection of model-generated text, which can mitigate harms and misuses of language models.
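For illustration, here is a sketch of statistical detection for one widely used watermark family (a keyed pseudorandom "green list" of tokens favored during generation); it shows the general detection idea rather than the specific schemes studied in the paper.

```python
# Green-list watermark detection via a z-test on the fraction of green tokens.
import hashlib
import math

def is_green(prev_id, token_id, gamma=0.5, key=b"secret"):
    """Pseudorandomly mark a gamma-fraction of tokens green, seeded by the
    previous token and a secret key."""
    h = hashlib.sha256(key + prev_id.to_bytes(4, "little")
                       + token_id.to_bytes(4, "little")).digest()
    return int.from_bytes(h[:4], "little") < gamma * 2**32

def watermark_z_score(token_ids, gamma=0.5):
    """Under the null (unwatermarked text), each token is green with
    probability gamma; a large z-score flags likely model-generated text."""
    greens = sum(is_green(p, t) for p, t in zip(token_ids, token_ids[1:]))
    n = len(token_ids) - 1
    return (greens - gamma * n) / math.sqrt(n * gamma * (1 - gamma))
```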
no code implementations • 3 Oct 2023 • Xiang Lisa Li, Vaishnavi Shrivastava, Siyan Li, Tatsunori Hashimoto, Percy Liang
To improve the consistency of LMs, we propose to fine-tune on the filtered generator and validator responses that are GV-consistent; we call this approach consistency fine-tuning.
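A minimal sketch of that loop follows; the prompt templates and the `finetune` call are placeholders, not the paper's released code.

```python
# Generate, validate with the same model, keep only GV-consistent pairs,
# then fine-tune on the filtered set (illustrative).
def gv_consistent_pairs(model, questions):
    kept = []
    for q in questions:
        answer = model.generate(f"Q: {q}\nA:")                         # generator role
        verdict = model.generate(
            f"Q: {q}\nProposed answer: {answer}\nIs this correct? Yes or No:")
        if verdict.strip().lower().startswith("yes"):                  # GV-consistent
            kept.append((f"Q: {q}\nA:", answer))
            kept.append((f"Q: {q}\nProposed answer: {answer}\nIs this correct?", "Yes"))
    return kept

def consistency_finetune(model, questions):
    data = gv_consistent_pairs(model, questions)
    return finetune(model, data)   # placeholder for any standard SFT routine
```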
1 code implementation • NeurIPS 2023 • Jesse Mu, Xiang Lisa Li, Noah Goodman
Prompting is the primary way to utilize the multitask capabilities of language models (LMs), but prompts occupy valuable space in the input context window, and repeatedly encoding the same prompt is computationally inefficient.
2 code implementations • 28 Dec 2022 • Omar Khattab, Keshav Santhanam, Xiang Lisa Li, David Hall, Percy Liang, Christopher Potts, Matei Zaharia
Retrieval-augmented in-context learning has emerged as a powerful approach for addressing knowledge-intensive tasks using frozen language models (LM) and retrieval models (RM).
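A single-hop retrieve-then-read sketch with hypothetical `retriever.search` and `lm.generate` interfaces; the framework in the paper composes richer multi-stage programs than this.

```python
# Retrieval-augmented in-context learning with a frozen RM and frozen LM (sketch).
def retrieve_then_read(question, retriever, lm, k=3):
    passages = retriever.search(question, k=k)          # frozen retrieval model
    context = "\n\n".join(f"[{i+1}] {p}" for i, p in enumerate(passages))
    prompt = ("Answer the question using the passages.\n\n"
              f"{context}\n\nQuestion: {question}\nAnswer:")
    return lm.generate(prompt)                          # frozen language model
```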
1 code implementation • 19 Dec 2022 • Mina Lee, Megha Srivastava, Amelia Hardy, John Thickstun, Esin Durmus, Ashwin Paranjape, Ines Gerard-Ursin, Xiang Lisa Li, Faisal Ladhak, Frieda Rong, Rose E. Wang, Minae Kwon, Joon Sung Park, Hancheng Cao, Tony Lee, Rishi Bommasani, Michael Bernstein, Percy Liang
To evaluate human-LM interaction, we develop a new framework, Human-AI Language-based Interaction Evaluation (HALIE), that defines the components of interactive systems and dimensions to consider when designing evaluation metrics.
2 code implementations • 27 Oct 2022 • Xiang Lisa Li, Ari Holtzman, Daniel Fried, Percy Liang, Jason Eisner, Tatsunori Hashimoto, Luke Zettlemoyer, Mike Lewis
We propose contrastive decoding (CD), a reliable decoding approach that optimizes a contrastive objective subject to a plausibility constraint.
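A sketch of one contrastive decoding step under this formulation: among tokens the expert model deems plausible (probability at least alpha times the expert's maximum probability), pick the token that maximizes the expert-minus-amateur log-probability gap.

```python
# One greedy CD step over [vocab]-shaped logits (sketch).
import torch

def contrastive_decoding_step(expert_logits, amateur_logits, alpha=0.1):
    expert_logprobs = torch.log_softmax(expert_logits, dim=-1)
    amateur_logprobs = torch.log_softmax(amateur_logits, dim=-1)
    # Plausibility constraint: keep tokens with p_expert >= alpha * max p_expert.
    cutoff = expert_logprobs.max() + torch.log(torch.tensor(alpha))
    plausible = expert_logprobs >= cutoff
    scores = expert_logprobs - amateur_logprobs   # contrastive objective
    scores[~plausible] = float("-inf")
    return torch.argmax(scores).item()
```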
1 code implementation • 27 May 2022 • Xiang Lisa Li, John Thickstun, Ishaan Gulrajani, Percy Liang, Tatsunori B. Hashimoto
Controlling the behavior of language models (LMs) without re-training is a major open problem in natural language generation.
no code implementations • 29 Sep 2021 • John Hewitt, Xiang Lisa Li, Sang Michael Xie, Benjamin Newman, Percy Liang
When finetuning a pretrained language model for natural language generation tasks, one is currently faced with a tradeoff.
2 code implementations • 16 Aug 2021 • Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, Percy Liang
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks.
13 code implementations • ACL 2021 • Xiang Lisa Li, Percy Liang
Fine-tuning is the de facto way to leverage large pretrained language models to perform downstream tasks.
2 code implementations • ACL (GEM) 2021 • Alexandra DeLucia, Aaron Mueller, Xiang Lisa Li, João Sedoc
Narrative generation is an open-ended NLP task in which a model generates a story given a prompt.
2 code implementations • ACL 2020 • Xiang Lisa Li, Alexander M. Rush
In this work, we consider augmenting neural generation models with discrete control states learned through a structured latent-variable approach.
1 code implementation • IJCNLP 2019 • Xiang Lisa Li, Jason Eisner
Pre-trained word embeddings like ELMo and BERT contain rich syntactic and semantic information, resulting in state-of-the-art performance on various tasks.
no code implementations • TACL 2019 • Xiang Lisa Li, Dingquan Wang, Jason Eisner
When the tree's yield is rendered as a written sentence, a string rewriting mechanism transduces the underlying marks into "surface" marks, which are part of the observed (surface) string but should not be regarded as part of the tree.