Search Results for author: Collin Burns

Found 12 papers, 8 papers with code

Weak-to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision

no code implementations · 14 Dec 2023 · Collin Burns, Pavel Izmailov, Jan Hendrik Kirchner, Bowen Baker, Leo Gao, Leopold Aschenbrenner, Yining Chen, Adrien Ecoffet, Manas Joglekar, Jan Leike, Ilya Sutskever, Jeff Wu

Widely used alignment techniques, such as reinforcement learning from human feedback (RLHF), rely on the ability of humans to supervise model behavior - for example, to evaluate whether a model faithfully followed instructions or generated safe outputs.

Discovering Latent Knowledge in Language Models Without Supervision

1 code implementation · 7 Dec 2022 · Collin Burns, Haotian Ye, Dan Klein, Jacob Steinhardt

Existing techniques for training language models can be misaligned with the truth: if we train models with imitation learning, they may reproduce errors that humans make; if we train them to generate text that humans rate highly, they may output errors that human evaluators can't detect.

Imitation Learning · Language Modelling +2
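The abstract's idea of eliciting latent knowledge without supervision can be sketched as a contrast-consistent probe: given hidden states for a statement and its negation, fit a direction whose probabilities are consistent (they sum to one across the pair) and confident (away from 0.5). This is a hedged illustration under assumed inputs `pos`/`neg` (hidden-state matrices), not the paper's exact implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_consistency_probe(pos, neg, steps=1000, lr=0.2, seed=0):
    """Fit a linear probe so p(pos) ~= 1 - p(neg) (consistency) while
    pushing probabilities away from 0.5 (confidence).
    pos, neg: (n, d) hidden states for a statement and its negation."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=pos.shape[1])
    for _ in range(steps):
        pp, pn = sigmoid(pos @ w), sigmoid(neg @ w)
        # consistency term: (p+ + p- - 1)^2
        c = pp + pn - 1.0
        grad = 2 * c[:, None] * (pp * (1 - pp))[:, None] * pos
        grad += 2 * c[:, None] * (pn * (1 - pn))[:, None] * neg
        # confidence term: min(p+, p-)^2, using a subgradient at the min
        use_pos = pp < pn
        m = np.where(use_pos, pp, pn)
        sg = np.where(use_pos[:, None], (pp * (1 - pp))[:, None] * pos,
                      (pn * (1 - pn))[:, None] * neg)
        grad += 2 * m[:, None] * sg
        w -= lr * grad.mean(axis=0)
    return w

def probe_loss(w, pos, neg):
    pp, pn = sigmoid(pos @ w), sigmoid(neg @ w)
    return np.mean((pp + pn - 1.0) ** 2 + np.minimum(pp, pn) ** 2)
```

On synthetic hidden states containing a "truth direction", gradient descent on this unsupervised loss drives the probe below the trivial loss of 0.25 attained at `w = 0` (where every probability is 0.5).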

Measuring Coding Challenge Competence With APPS

3 code implementations · 20 May 2021 · Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora, Ethan Guo, Collin Burns, Samir Puranik, Horace He, Dawn Song, Jacob Steinhardt

Recent models such as GPT-Neo can pass approximately 20% of the test cases of introductory problems, so we find that machine learning models are now beginning to learn how to code.

BIG-bench Machine Learning · Code Generation
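The metric referenced in the abstract, the fraction of test cases a generated program passes, can be sketched generically. The candidate function and test cases below are made-up placeholders, not examples from APPS.

```python
def pass_rate(fn, cases):
    """Fraction of (args, expected) test cases a candidate function passes.
    Exceptions count as failures, as they would for a buggy program."""
    passed = 0
    for args, expected in cases:
        try:
            if fn(*args) == expected:
                passed += 1
        except Exception:
            pass
    return passed / len(cases)

# hypothetical candidate for a "sum two numbers" problem
candidate = lambda a, b: a + b
cases = [((1, 2), 3), ((0, 0), 0), ((2, 2), 5)]  # last case deliberately fails
```

Here `pass_rate(candidate, cases)` returns 2/3, analogous to a model passing roughly 20% of cases on introductory problems.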

Limitations of Post-Hoc Feature Alignment for Robustness

1 code implementation · CVPR 2021 · Collin Burns, Jacob Steinhardt

Feature alignment is an approach to improving robustness to distribution shift that matches the distribution of feature activations between the training distribution and test distribution.

Unsupervised Domain Adaptation
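As a hedged illustration of the general idea described above (not the specific alignment methods the paper analyzes), a minimal post-hoc feature alignment matches the first two moments of each feature dimension between test and training distributions:

```python
import numpy as np

def moment_align(train_feats, test_feats, eps=1e-8):
    """Shift and rescale each test-feature dimension so its mean and
    standard deviation match the training distribution's statistics."""
    mu_tr, sd_tr = train_feats.mean(axis=0), train_feats.std(axis=0)
    mu_te, sd_te = test_feats.mean(axis=0), test_feats.std(axis=0)
    # standardize under test statistics, then re-color with train statistics
    return (test_feats - mu_te) / (sd_te + eps) * sd_tr + mu_tr
```

After alignment the test activations have (by construction) the training mean and standard deviation per dimension; the paper's point is that such matching alone does not guarantee robustness under distribution shift.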

CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review

2 code implementations · 10 Mar 2021 · Dan Hendrycks, Collin Burns, Anya Chen, Spencer Ball

We address this bottleneck within the legal domain by introducing the Contract Understanding Atticus Dataset (CUAD), a new dataset for legal contract review.

Measuring Mathematical Problem Solving With the MATH Dataset

4 code implementations · 5 Mar 2021 · Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, Jacob Steinhardt

To facilitate future research and increase accuracy on MATH, we also contribute a large auxiliary pretraining dataset which helps teach models the fundamentals of mathematics.

Math · Math Word Problem Solving +1

How Multipurpose Are Language Models?

no code implementations · ICLR 2021 · Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, Jacob Steinhardt

By comprehensively evaluating the breadth and depth of a model's academic and professional understanding, our test can be used to analyze models across many tasks and to identify important shortcomings.

Elementary Mathematics · Test +1

Measuring Massive Multitask Language Understanding

10 code implementations · 7 Sep 2020 · Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, Jacob Steinhardt

By comprehensively evaluating the breadth and depth of a model's academic and professional understanding, our test can be used to analyze models across many tasks and to identify important shortcomings.

Elementary Mathematics · Multi-task Language Understanding +2
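A multiple-choice benchmark of this kind is typically scored by letting the model rate each answer choice and picking the highest-scoring one. The sketch below is a generic harness, not the benchmark's official code; `score_fn` is a placeholder for however a model scores a (prompt, choice) pair.

```python
def multiple_choice_accuracy(questions, score_fn):
    """questions: list of (prompt, choices, correct_index).
    score_fn(prompt, choice) -> float, higher means more likely.
    Returns the fraction of questions answered correctly."""
    correct = 0
    for prompt, choices, answer in questions:
        scores = [score_fn(prompt, c) for c in choices]
        if scores.index(max(scores)) == answer:
            correct += 1
    return correct / len(questions)
```

With a toy `score_fn` (e.g. scoring by choice length), the harness reduces to ordinary argmax accuracy, which is how per-task shortcomings can be read off from per-subject scores.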

Streaming Complexity of SVMs

no code implementations · 7 Jul 2020 · Alexandr Andoni, Collin Burns, Yi Li, Sepideh Mahabadi, David P. Woodruff

We show that, for both problems, for dimensions $d=1, 2$, one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda\epsilon}$, which is the complexity of SGD for strongly convex functions like the bias-regularized SVM, and which is known to be tight in general, even for $d=1$.

Interpreting Black Box Models via Hypothesis Testing

1 code implementation · 29 Mar 2019 · Collin Burns, Jesse Thomason, Wesley Tansey

In science and medicine, model interpretations may be reported as discoveries of natural phenomena or used to guide patient treatments.

Two-sample testing
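One way to put an error rate on a claimed model interpretation, in the spirit of the abstract, is a permutation test of whether the model's accuracy actually depends on a given feature. This is an illustrative stand-in with made-up names, not the paper's exact procedure.

```python
import numpy as np

def permutation_pvalue(predict, X, y, feature, n_perm=100, seed=0):
    """H0: `feature` is irrelevant to the model's accuracy.
    p-value = fraction of feature-permuted datasets whose accuracy
    matches or exceeds the unpermuted accuracy (+1 correction)."""
    rng = np.random.default_rng(seed)
    base = np.mean(predict(X) == y)
    hits = 0
    for _ in range(n_perm):
        Xp = X.copy()
        Xp[:, feature] = rng.permutation(Xp[:, feature])
        if np.mean(predict(Xp) == y) >= base:
            hits += 1
    return (hits + 1) / (n_perm + 1)
```

For a black-box classifier that thresholds only feature 0, the p-value is small for feature 0 and near 1 for an unused feature, so "the model relies on this feature" becomes a testable claim rather than an eyeballed one.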
