Search Results for author: Kaitlyn Zhou

Found 5 papers, 2 papers with code

Deconstructing NLG Evaluation: Evaluation Practices, Assumptions, and Their Implications

no code implementations • 13 May 2022 • Kaitlyn Zhou, Su Lin Blodgett, Adam Trischler, Hal Daumé III, Kaheer Suleman, Alexandra Olteanu

There are many ways to express similar things in text, which makes evaluating natural language generation (NLG) systems difficult.

Text Generation

Problems with Cosine as a Measure of Embedding Similarity for High Frequency Words

1 code implementation • ACL 2022 • Kaitlyn Zhou, Kawin Ethayarajh, Dallas Card, Dan Jurafsky

Cosine similarity of contextual embeddings is used in many NLP tasks (e.g., QA, IR, MT) and metrics (e.g., BERTScore).
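As a quick illustration (a minimal sketch, not code released with this paper), this is the cosine similarity computation the abstract refers to, applied here to stand-in NumPy vectors in place of real contextual embeddings:

# Minimal sketch: cosine similarity between two embedding vectors.
# The random vectors below are placeholders for contextual embeddings
# (e.g., BERT hidden states); they are not from the paper.
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    # cos(u, v) = (u · v) / (||u|| ||v||)
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(0)
u, v = rng.normal(size=768), rng.normal(size=768)  # hypothetical 768-d embeddings
print(cosine_similarity(u, v))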

Richer Countries and Richer Representations

1 code implementation • Findings (ACL) 2022 • Kaitlyn Zhou, Kawin Ethayarajh, Dan Jurafsky

We examine whether some countries are more richly represented in embedding space than others.

On the Opportunities and Risks of Foundation Models

no code implementations • 16 Aug 2021 • Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, Percy Liang

AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks.

Transfer Learning

Frequency-based Distortions in Contextualized Word Embeddings

no code implementations • 17 Apr 2021 • Kaitlyn Zhou, Kawin Ethayarajh, Dan Jurafsky

How does word frequency in pre-training data affect the behavior of similarity metrics in contextualized BERT embeddings?

Semantic Similarity • Semantic Textual Similarity +1
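To make the question above concrete, here is an illustrative probe (a sketch under my own assumptions, not the paper's released code): it compares the average pairwise cosine similarity of a word's BERT embeddings across a few contexts, for a high-frequency word versus a rarer one. The model name, the word pair ("good" vs. "gregarious"), and the example sentences are all assumptions chosen for illustration:

# Illustrative probe, not the paper's code: average pairwise cosine
# similarity of a word's contextual embeddings across several sentences.
import itertools
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embeddings_of(word, sentences):
    # Contextual embedding of `word` (its first subtoken) in each sentence.
    word_id = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(word)[0])
    vecs = []
    for sent in sentences:
        enc = tokenizer(sent, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**enc).last_hidden_state[0]
        idx = (enc["input_ids"][0] == word_id).nonzero()[0, 0]
        vecs.append(hidden[idx])
    return vecs

def mean_pairwise_cosine(vecs):
    sims = [torch.cosine_similarity(a, b, dim=0).item()
            for a, b in itertools.combinations(vecs, 2)]
    return sum(sims) / len(sims)

# Hypothetical sentences; "good" is far more frequent in typical
# pre-training corpora than "gregarious".
frequent = ["That was a good idea.", "She is a good cook.", "Good advice helps."]
rare = ["He is a gregarious host.", "Gregarious birds flock together.", "She has a gregarious laugh."]
print("good      :", mean_pairwise_cosine(embeddings_of("good", frequent)))
print("gregarious:", mean_pairwise_cosine(embeddings_of("gregarious", rare)))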
