no code implementations • EMNLP 2021 • Jeremiah Milbauer, Adarsh Mathew, James Evans
The Internet is home to thousands of communities, each with its own unique worldview and associated ideological differences.
no code implementations • 8 Oct 2024 • Anjali Kantharuban, Jeremiah Milbauer, Emma Strubell, Graham Neubig
We demonstrate that when people use large language models (LLMs) to generate recommendations, the LLMs produce responses that reflect both what the user wants and who the user is.
1 code implementation • 6 Oct 2023 • Jeremiah Milbauer, Ziqi Ding, Zhijin Wu, Tongshuang Wu
Reading and understanding the stories in the news is increasingly difficult.
no code implementations • 19 Jul 2023 • Tongshuang Wu, Haiyi Zhu, Maya Albayrak, Alexis Axon, Amanda Bertsch, Wenxing Deng, Ziqi Ding, Bill Guo, Sireesh Gururaja, Tzu-Sheng Kuo, Jenny T. Liang, Ryan Liu, Ihita Mandal, Jeremiah Milbauer, Xiaolin Ni, Namrata Padmanabhan, Subhashini Ramkumar, Alexis Sudjianto, Jordan Taylor, Ying-Jui Tseng, Patricia Vaidos, Zhijin Wu, Wei Wu, Chenyang Yang
We reflect on humans' and LLMs' different sensitivities to instructions, stress the importance of enabling human-facing safeguards for LLMs, and discuss the potential of training humans and LLMs with complementary skill sets.
no code implementations • 31 May 2023 • Jeremiah Milbauer, Annie Louis, Mohammad Javad Hosseini, Alex Fabrikant, Donald Metzler, Tal Schuster
Transformer encoders contextualize token representations by attending to all other tokens at each layer, leading to a quadratic increase in compute with input length.
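The quadratic cost referenced in this abstract comes from full self-attention building an n × n score matrix over the input tokens. Below is a minimal NumPy sketch of that computation, purely illustrative and not code from the paper; the function name, shapes, and the two test lengths are assumptions for demonstration.

```python
# Illustrative sketch only (not the paper's implementation): full
# self-attention forms an n x n score matrix, so compute and memory
# grow quadratically with sequence length n.
import numpy as np

def attention_scores(x: np.ndarray) -> np.ndarray:
    """x: (n, d) token representations; returns (n, n) attention weights."""
    n, d = x.shape
    scores = x @ x.T / np.sqrt(d)                 # n x n pairwise scores -> O(n^2)
    scores -= scores.max(axis=-1, keepdims=True)  # subtract row max for numerical stability
    weights = np.exp(scores)
    return weights / weights.sum(axis=-1, keepdims=True)

# Doubling the input length quadruples the score matrix:
for n in (128, 256):
    w = attention_scores(np.random.randn(n, 64))
    print(n, w.shape, w.size)  # w.size grows as n**2
```

Approaches like the one this abstract describes reduce that cost by limiting which tokens attend to each other at early layers, rather than computing the full n × n interaction everywhere.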