no code implementations • 21 Mar 2024 • Mina Lee, Katy Ilonka Gero, John Joon Young Chung, Simon Buckingham Shum, Vipul Raheja, Hua Shen, Subhashini Venugopalan, Thiemo Wambsganss, David Zhou, Emad A. Alghamdi, Tal August, Avinash Bhat, Madiha Zahrah Choksi, Senjuti Dutta, Jin L. C. Guo, Md Naimul Hoque, Yewon Kim, Simon Knight, Seyed Parsa Neshaei, Agnia Sergeyuk, Antonette Shibani, Disha Shrivastava, Lila Shroff, Jessi Stark, Sarah Sterman, Sitong Wang, Antoine Bosselut, Daniel Buschek, Joseph Chee Chang, Sherol Chen, Max Kreminski, Joonsuk Park, Roy Pea, Eugenia H. Rho, Shannon Zejiang Shen, Pao Siangliulue
In our era of rapid technological advancement, the research landscape for writing assistants has become increasingly fragmented across various research communities.
no code implementations • 27 Feb 2024 • Senjuti Dutta, Sherol Chen, Sunny Mak, Amnah Ahmad, Katherine Collins, Alena Butryna, Deepak Ramachandran, Krishnamurthy Dvijotham, Ellie Pavlick, Ravi Rajakumar
Image generation models are poised to become ubiquitous in a range of applications.
no code implementations • 1 Nov 2023 • Senjuti Dutta, Sid Mittal, Sherol Chen, Deepak Ramachandran, Ravi Rajakumar, Ian Kivlichan, Sunny Mak, Alena Butryna, Praveen Paritosh
The prevalence and impact of toxic discussions online have made content moderation crucial. Automated systems can play a vital role in identifying toxicity and reducing reliance on human moderation. Nevertheless, identifying toxic comments for diverse communities continues to present challenges that are addressed in this paper. The two-part goal of this study is to (1) identify intuitive variances from annotator disagreement using quantitative analysis and (2) model the subjectivity of these viewpoints. To achieve our goal, we published a new dataset (https://github.com/XXX) with expert annotators' annotations and used two other public datasets to identify the subjectivity of toxicity. Then, leveraging a Large Language Model (LLM), we evaluate the model's ability to mimic diverse viewpoints on toxicity by varying the size of the training data and by testing both on the same set of annotators used during model training and on a separate, held-out set of annotators. We conclude that subjectivity is evident across all annotator groups, demonstrating the shortcomings of majority-rule voting.
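To make the seen-vs-held-out annotator evaluation concrete, a minimal sketch might look like the following; the dataset fields, label scheme, and the specific split are illustrative assumptions, not the paper's actual setup:

```python
from collections import defaultdict

# Hypothetical toxicity annotations: (comment_id, annotator_id, label in {0, 1}).
annotations = [
    ("c1", "a1", 1), ("c1", "a2", 0), ("c1", "a3", 1),
    ("c2", "a1", 0), ("c2", "a2", 0), ("c2", "a3", 1),
    ("c3", "a1", 1), ("c3", "a2", 1), ("c3", "a3", 0),
]

# Split annotators into a "seen" group (also present at training time)
# and a "held-out" group (unseen during training).
seen_annotators = {"a1", "a2"}
heldout_annotators = {"a3"}

def majority_vote(labels):
    """Majority-rule label for one comment (ties break toward non-toxic)."""
    return int(sum(labels) > len(labels) / 2)

# Group labels per comment.
by_comment = defaultdict(list)
for comment_id, annotator_id, label in annotations:
    by_comment[comment_id].append((annotator_id, label))

def disagreement_rate(annotator_group):
    """How often a group's individual labels disagree with the majority vote --
    a simple proxy for the per-group subjectivity the paper models."""
    total, disagreements = 0, 0
    for votes in by_comment.values():
        majority = majority_vote([label for _, label in votes])
        for annotator_id, label in votes:
            if annotator_id in annotator_group:
                total += 1
                disagreements += int(label != majority)
    return disagreements / total if total else 0.0

print("seen-group disagreement:", disagreement_rate(seen_annotators))
print("held-out disagreement:  ", disagreement_rate(heldout_annotators))
```

A nonzero disagreement rate in either group is exactly the signal that majority-rule voting discards.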
no code implementations • 18 Oct 2023 • Ilia Sucholutsky, Lukas Muttenthaler, Adrian Weller, Andi Peng, Andreea Bobu, Been Kim, Bradley C. Love, Erin Grant, Iris Groen, Jascha Achterberg, Joshua B. Tenenbaum, Katherine M. Collins, Katherine L. Hermann, Kerem Oktar, Klaus Greff, Martin N. Hebart, Nori Jacoby, Qiuyi Zhang, Raja Marjieh, Robert Geirhos, Sherol Chen, Simon Kornblith, Sunayana Rane, Talia Konkle, Thomas P. O'Connell, Thomas Unterthiner, Andrew K. Lampinen, Klaus-Robert Müller, Mariya Toneva, Thomas L. Griffiths
Finally, we lay out open problems in representational alignment where progress can benefit all three of these fields.
no code implementations • 13 Jul 2023 • Qiuyi Zhang, Michael S. Lee, Sherol Chen
Beliefs and values are increasingly being incorporated into our AI systems through alignment processes, such as carefully curating data collection principles or regularizing the loss function used for training.
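One way to read "regularizing the loss function" is as a weighted penalty added to the task objective; this is a generic sketch under that assumption, not the formulation used in the paper, and the penalty term and weight are illustrative:

```python
def aligned_loss(task_loss, value_penalty, lam=0.1):
    """Generic value-regularized objective: task loss plus a weighted
    penalty that discourages outputs violating an encoded value or belief."""
    return task_loss + lam * value_penalty

# A model with low task loss but a large value violation ends up with a
# worse overall objective than a slightly less accurate but better-aligned one.
print(aligned_loss(task_loss=0.20, value_penalty=3.0))  # 0.50
print(aligned_loss(task_loss=0.25, value_penalty=0.5))  # 0.30
```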
no code implementations • EACL 2021 • Ben Swanson, Kory Mathewson, Ben Pietrzak, Sherol Chen, Monica Dinalescu
Few-shot learning with large language models has the potential to give individuals without formal machine learning training access to a wide range of text-to-text models.
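For readers without an ML background, few-shot prompting amounts to specifying a text-to-text model entirely through example pairs in the prompt; the task and examples below are illustrative, not taken from the paper:

```python
# A few-shot text-to-text "model" is defined by labeled examples in the prompt;
# no parameter training is required from the end user.
examples = [
    ("The movie was wonderful.", "positive"),
    ("I would not recommend this restaurant.", "negative"),
]

def build_prompt(examples, query):
    """Concatenate labeled examples and the new input into a single prompt
    for a large language model to complete."""
    lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

print(build_prompt(examples, "The service was slow but the food was great."))
```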