Search Results for author: Rylan Schaeffer

Found 16 papers, 1 paper with code

Is Model Collapse Inevitable? Breaking the Curse of Recursion by Accumulating Real and Synthetic Data

no code implementations • 1 Apr 2024 • Matthias Gerstgrasser, Rylan Schaeffer, Apratim Dey, Rafael Rafailov, Henry Sleight, John Hughes, Tomasz Korbak, Rajashree Agrawal, Dhruv Pai, Andrey Gromov, Daniel A. Roberts, Diyi Yang, David L. Donoho, Sanmi Koyejo

The proliferation of generative models, combined with pretraining on web-scale data, raises a timely question: what happens when these models are trained on their own generated outputs?

Image Generation
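
The accumulate-vs-replace distinction in the title above can be illustrated with a toy experiment (a sketch in the spirit of the paper's setup, not its exact experiments): iteratively refit a simple generative model on its own samples, and compare replacing the training set with synthetic data each generation against accumulating real and synthetic data together.

```python
# Toy sketch of accumulate-vs-replace (not the paper's exact experiments):
# iteratively refit a 1-D Gaussian on model-generated samples, either REPLACING
# the dataset each generation or ACCUMULATING real + synthetic data, and track
# the fitted standard deviation.
import numpy as np

rng = np.random.default_rng(0)
n_per_generation, n_generations = 50, 200
real = rng.normal(loc=0.0, scale=1.0, size=n_per_generation)  # "real" data

def final_sigma(accumulate: bool) -> float:
    data = real.copy()
    mu, sigma = data.mean(), data.std()
    for _ in range(n_generations):
        synthetic = rng.normal(mu, sigma, size=n_per_generation)  # model samples
        data = np.concatenate([data, synthetic]) if accumulate else synthetic
        mu, sigma = data.mean(), data.std()                       # refit the "model"
    return sigma

print("replace:    sigma =", final_sigma(accumulate=False))  # typically collapses well below 1
print("accumulate: sigma =", final_sigma(accumulate=True))   # typically stays near 1
```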

Bridging Associative Memory and Probabilistic Modeling

no code implementations • 15 Feb 2024 • Rylan Schaeffer, Nika Zahedi, Mikail Khona, Dhruv Pai, Sang Truong, Yilun Du, Mitchell Ostrow, Sarthak Chandra, Andres Carranza, Ila Rani Fiete, Andrey Gromov, Sanmi Koyejo

Based on the observation that associative memory's energy functions can be seen as probabilistic modeling's negative log likelihoods, we build a bridge between the two that enables useful flow of ideas in both directions.

In-Context Learning
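
The "energy as negative log likelihood" observation the abstract above builds on is the standard energy-based-model identity; a minimal statement (notation mine, not the paper's) is:

```latex
% Standard energy-based-model identity: an energy function E defines a Gibbs
% distribution, so up to the log normalizer, energy is a negative log likelihood.
\[
  p(\mathbf{x}) = \frac{\exp\!\bigl(-E(\mathbf{x})\bigr)}{Z},
  \qquad
  Z = \sum_{\mathbf{x}'} \exp\!\bigl(-E(\mathbf{x}')\bigr)
  \quad\Longrightarrow\quad
  E(\mathbf{x}) = -\log p(\mathbf{x}) - \log Z .
\]
% Example: the classical Hopfield energy E(\mathbf{x}) = -\tfrac{1}{2}\mathbf{x}^\top W \mathbf{x}
% corresponds under this identity to a Boltzmann (Ising) model with coupling matrix W.
```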

Investigating Data Contamination for Pre-training Language Models

no code implementations • 11 Jan 2024 • Minhao Jiang, Ken Ziyu Liu, Ming Zhong, Rylan Schaeffer, Siru Ouyang, Jiawei Han, Sanmi Koyejo

Language models pre-trained on web-scale corpora demonstrate impressive capabilities on diverse downstream tasks.

Language Modelling

Disentangling Fact from Grid Cell Fiction in Trained Deep Path Integrators

no code implementations • 6 Dec 2023 • Rylan Schaeffer, Mikail Khona, Sanmi Koyejo, Ila Rani Fiete

Work on deep learning-based models of grid cells suggests that grid cells generically and robustly arise from optimizing networks to path integrate, i.e., track one's spatial position by integrating self-velocity signals.
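
For readers unfamiliar with the task, "path integration" simply means recovering position by integrating a stream of self-velocity signals; a minimal illustration of the task (not of the paper's networks) is:

```python
# Minimal illustration of path integration: recover position by integrating
# self-velocity signals over time. This is a toy sketch of the task these
# grid-cell models are trained on, not the paper's architecture.
import numpy as np

rng = np.random.default_rng(0)
dt = 0.1
velocities = rng.normal(scale=0.5, size=(500, 2))  # 2-D self-velocity at each step

positions = np.cumsum(velocities * dt, axis=0)     # integrate velocity -> position

# A trained path integrator receives only `velocities` and must output `positions`;
# the debate in this literature is whether grid-like tuning necessarily emerges
# from solving this task.
print(positions[-1])  # final position after integrating the velocity stream
```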

What Causes Polysemanticity? An Alternative Origin Story of Mixed Selectivity from Incidental Causes

no code implementations • 5 Dec 2023 • Victor Lecomte, Kushal Thaman, Rylan Schaeffer, Naomi Bashkansky, Trevor Chow, Sanmi Koyejo

Using a combination of theory and experiments, we show that incidental polysemanticity can arise for multiple reasons, including regularization and neural noise; it occurs because random initialization can, by chance alone, initially assign multiple features to the same neuron, and training dynamics then strengthen that overlap.
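
The "by chance alone" claim has a birthday-problem flavor; the hypothetical simulation below (not the paper's model) checks how often a random initialization gives two features the same strongest neuron:

```python
# Toy illustration: with n features each assigned a "favorite" (strongest)
# neuron among m neurons at random initialization, collisions (two features
# sharing a favorite neuron) are common by chance alone.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_neurons, n_trials = 32, 512, 2_000

collisions = 0
for _ in range(n_trials):
    W = rng.normal(size=(n_features, n_neurons))   # random initial weights
    favorite = W.argmax(axis=1)                    # each feature's strongest neuron
    if len(np.unique(favorite)) < n_features:      # some neuron shared by 2+ features?
        collisions += 1

print(f"fraction of inits with a shared neuron: {collisions / n_trials:.3f}")
# Birthday-problem estimate: about 1 - exp(-32*31 / (2*512)) ~ 0.62 with these sizes.
```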

Testing Assumptions Underlying a Unified Theory for the Origin of Grid Cells

no code implementations • 27 Nov 2023 • Rylan Schaeffer, Mikail Khona, Adrian Bertagnoli, Sanmi Koyejo, Ila Rani Fiete

At both the population and single-cell levels, we find evidence suggesting that neither assumption is likely to hold in biological neural representations.

Pretraining on the Test Set Is All You Need

no code implementations • 13 Sep 2023 • Rylan Schaeffer

Inspired by recent work demonstrating the promise of smaller Transformer-based language models pretrained on carefully curated data, we supercharge such approaches by investing heavily in curating a novel, high quality, non-synthetic data mixture based solely on evaluation benchmarks.

FACADE: A Framework for Adversarial Circuit Anomaly Detection and Evaluation

no code implementations • 20 Jul 2023 • Dhruv Pai, Andres Carranza, Rylan Schaeffer, Arnuv Tandon, Sanmi Koyejo

We present FACADE, a novel probabilistic and geometric framework designed for unsupervised mechanistic anomaly detection in deep neural networks.

Anomaly Detection

Deceptive Alignment Monitoring

no code implementations • 20 Jul 2023 • Andres Carranza, Dhruv Pai, Rylan Schaeffer, Arnuv Tandon, Sanmi Koyejo

As the capabilities of large machine learning models continue to grow, and as the autonomy afforded to such models continues to expand, the spectre of a new adversary looms: the models themselves.

Invalid Logic, Equivalent Gains: The Bizarreness of Reasoning in Language Model Prompting

no code implementations • 20 Jul 2023 • Rylan Schaeffer, Kateryna Pistunova, Samar Khanna, Sarthak Consul, Sanmi Koyejo

We find that logically invalid reasoning prompts do indeed achieve performance gains on BBH tasks similar to those from logically valid reasoning prompts.

Language Modelling

DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models

no code implementations • NeurIPS 2023 • Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li

Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly.

Adversarial Robustness, Ethics, +1

Are Emergent Abilities of Large Language Models a Mirage?

no code implementations • NeurIPS 2023 • Rylan Schaeffer, Brando Miranda, Sanmi Koyejo

Recent work claims that large language models display emergent abilities, abilities not present in smaller-scale models that are present in larger-scale models.
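
One way to see the paper's central claim, that apparent emergence can be an artifact of the chosen metric, is with made-up numbers (a sketch, not the paper's data): a capability that improves smoothly per token looks sharply "emergent" when scored with a nonlinear metric such as exact match over a multi-token answer.

```python
# Sketch of the metric-choice argument with illustrative numbers: per-token
# accuracy that improves smoothly with scale looks "emergent" under the
# nonlinear exact-match metric over a multi-token answer.
import numpy as np

model_scale = np.logspace(7, 11, 9)                                  # hypothetical parameter counts
per_token_acc = 0.5 + 0.45 * (np.log10(model_scale) - 7) / 4          # smooth, gradual improvement
answer_length = 10                                                    # tokens per target answer

exact_match = per_token_acc ** answer_length                          # all tokens must be correct

for n, tok, em in zip(model_scale, per_token_acc, exact_match):
    print(f"{n:10.0e}  per-token={tok:.3f}  exact-match={em:.3f}")
# Per-token accuracy rises gently, but exact match stays near zero for most
# scales and then climbs rapidly, an apparent "emergent ability" created by the metric.
```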

Double Descent Demystified: Identifying, Interpreting & Ablating the Sources of a Deep Learning Puzzle

1 code implementation • 24 Mar 2023 • Rylan Schaeffer, Mikail Khona, Zachary Robertson, Akhilan Boopathy, Kateryna Pistunova, Jason W. Rocks, Ila Rani Fiete, Oluwasanmi Koyejo

Double descent is a surprising phenomenon in machine learning in which, as the number of model parameters grows relative to the number of data points, test error drops as models grow ever larger into the highly overparameterized (data-undersampled) regime.

Learning Theory, regression
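
The basic phenomenon can be reproduced compactly with minimum-norm least squares on random features (a hedged sketch, not the paper's exact ablations): as the feature count passes the number of training points, test error typically spikes and then falls again.

```python
# Compact double-descent sketch: minimum-norm linear regression on random
# cosine features of a 1-D problem; illustrative only, not the paper's ablations.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test = 40, 1000

x_train = rng.uniform(-np.pi, np.pi, size=(n_train, 1))
x_test = rng.uniform(-np.pi, np.pi, size=(n_test, 1))
y_train = np.sin(x_train).ravel() + 0.1 * rng.normal(size=n_train)  # noisy targets
y_test = np.sin(x_test).ravel()

# Shared pool of random features phi_j(x) = cos(w_j * x + b_j)
max_features = 200
w = rng.normal(scale=2.0, size=(1, max_features))
b = rng.uniform(0, 2 * np.pi, size=max_features)
phi = lambda x, p: np.cos(x @ w[:, :p] + b[:p])

for p in [5, 10, 20, 35, 40, 45, 60, 100, 200]:
    beta = np.linalg.pinv(phi(x_train, p)) @ y_train        # minimum-norm least squares
    test_mse = np.mean((phi(x_test, p) @ beta - y_test) ** 2)
    print(f"{p:4d} features  test MSE = {test_mse:.3f}")
# Test error typically peaks near p ~ n_train (the interpolation threshold)
# and then decreases again deep in the overparameterized regime.
```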

Streaming Inference for Infinite Non-Stationary Clustering

no code implementations • 2 May 2022 • Rylan Schaeffer, Gabrielle Kaili-May Liu, Yilun Du, Scott Linderman, Ila Rani Fiete

Learning from a continuous stream of non-stationary data in an unsupervised manner is arguably one of the most common and most challenging settings facing intelligent agents.

Clustering, Variational Inference

An Algorithmic Theory of Metacognition in Minds and Machines

no code implementations • 5 Nov 2021 • Rylan Schaeffer

To the machine learning community, our proposed theory creates a novel interaction between the Actor and Critic in Actor-Critic agents and notes a novel connection between RL and Bayesian Optimization.

Bayesian Optimization, Reinforcement Learning (RL)
