Search Results for author: Sarah Chen

Found 4 papers, 4 papers with code

Model Editing with Canonical Examples

1 code implementation • 9 Feb 2024 • John Hewitt, Sarah Chen, Lanruo Lora Xie, Edward Adams, Percy Liang, Christopher D. Manning

The Backpack defines a large bank of sense vectors--a decomposition of the different uses of each word--which are weighted and summed to form the output logits of the model.
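The abstract above describes sense vectors being weighted and summed to form output logits. A minimal sketch of that idea, with all names, shapes, and weights chosen for illustration (not the authors' implementation):

```python
import random

random.seed(0)
vocab_size, d_model, n_senses = 10, 4, 3

# Per-word bank of sense vectors: sense_vectors[w][s] is a d_model vector.
sense_vectors = [[[random.gauss(0, 1) for _ in range(d_model)]
                  for _ in range(n_senses)] for _ in range(vocab_size)]
# Output embedding rows used to project the combined vector to logits.
out_embed = [[random.gauss(0, 1) for _ in range(d_model)] for _ in range(vocab_size)]

def backpack_logits(word_id, weights):
    # Weighted sum of the word's sense vectors -> a single hidden vector.
    h = [sum(w * sv[i] for w, sv in zip(weights, sense_vectors[word_id]))
         for i in range(d_model)]
    # Dot the hidden vector with each output embedding row -> one logit per word.
    return [sum(e_i * h_i for e_i, h_i in zip(row, h)) for row in out_embed]

logits = backpack_logits(2, [0.5, 0.3, 0.2])
print(len(logits))  # one logit per vocabulary word
```

Because each sense vector's contribution to the logits is explicit, editing a single sense vector changes the model's behavior in an interpretable way, which is what makes the decomposition useful for model editing.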

Language Modelling Model Editing

Can LLMs Follow Simple Rules?

1 code implementation • 6 Nov 2023 • Norman Mu, Sarah Chen, Zifan Wang, Sizhe Chen, David Karamardian, Lulwa Aljeraisy, Basel Alomair, Dan Hendrycks, David Wagner

As Large Language Models (LLMs) are deployed with increasing real-world responsibilities, it is important to be able to specify and constrain the behavior of these systems in a reliable manner.

Representation Engineering: A Top-Down Approach to AI Transparency

1 code implementation • 2 Oct 2023 • Andy Zou, Long Phan, Sarah Chen, James Campbell, Phillip Guo, Richard Ren, Alexander Pan, Xuwang Yin, Mantas Mazeika, Ann-Kathrin Dombrowski, Shashwat Goel, Nathaniel Li, Michael J. Byun, Zifan Wang, Alex Mallen, Steven Basart, Sanmi Koyejo, Dawn Song, Matt Fredrikson, J. Zico Kolter, Dan Hendrycks

In this paper, we identify and characterize the emerging area of representation engineering (RepE), an approach to enhancing the transparency of AI systems that draws on insights from cognitive neuroscience.

Question Answering

Reducing Effects of Swath Gaps on Unsupervised Machine Learning Models for NASA MODIS Instruments

1 code implementation • 13 Jun 2021 • Sarah Chen, Esther Cao, Anirudh Koul, Siddha Ganju, Satyarth Praveen, Meher Anand Kasam

We compare the model trained with our augmentation techniques on the swath gap-filled data against the model trained on the original gap-free data and observe substantially improved performance.

BIG-bench Machine Learning
