Search Results for author: Alex Tamkin

Found 13 papers, 5 papers with code

Active Learning Helps Pretrained Models Learn the Intended Task

no code implementations • 18 Apr 2022 • Alex Tamkin, Dat Nguyen, Salil Deshpande, Jesse Mu, Noah Goodman

Models can fail in unpredictable ways during deployment due to task ambiguity, when multiple behaviors are consistent with the provided training data.

Active Learning
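For context, the generic active-learning recipe the title refers to is pool-based uncertainty sampling: query labels for the unlabeled examples the model is least sure about. A minimal sketch, with illustrative function and variable names rather than the paper's exact method:

```python
import numpy as np

def uncertainty_sampling(predict_proba, pool_X, budget):
    """Return indices of the pool examples with the highest
    predictive entropy, i.e. where the model is least certain."""
    probs = predict_proba(pool_X)                        # shape (n, num_classes)
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(entropy)[-budget:]

# Illustrative loop: label the queried points, retrain, repeat.
# for _ in range(num_rounds):
#     idx = uncertainty_sampling(model.predict_proba, pool_X, budget=32)
#     train_X = np.concatenate([train_X, pool_X[idx]])
#     train_y = np.concatenate([train_y, oracle_label(pool_X[idx])])
#     model.fit(train_X, train_y)
```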

Oolong: Investigating What Makes Crosslingual Transfer Hard with Controlled Studies

no code implementations • 24 Feb 2022 • Zhengxuan Wu, Isabel Papadimitriou, Alex Tamkin

Little is known about what makes cross-lingual transfer hard, since factors like tokenization, morphology, and syntax all change at once between languages.

Cross-Lingual Transfer • Transfer Learning

Tradeoffs Between Contrastive and Supervised Learning: An Empirical Study

no code implementations • 10 Dec 2021 • Ananya Karthik, Mike Wu, Noah Goodman, Alex Tamkin

Contrastive learning has made considerable progress in computer vision, outperforming supervised pretraining on a range of downstream datasets.

Contrastive Learning • Image Classification

DABS: A Domain-Agnostic Benchmark for Self-Supervised Learning

1 code implementation • 23 Nov 2021 • Alex Tamkin, Vincent Liu, Rongfei Lu, Daniel Fein, Colin Schultz, Noah Goodman

Self-supervised learning algorithms, including BERT and SimCLR, have enabled significant strides in fields like natural language processing, computer vision, and speech processing.

Self-Supervised Learning

Pretrained models are active learners

no code implementations • 29 Sep 2021 • Alex Tamkin, Dat Nguyen, Salil Deshpande, Jesse Mu, Noah Goodman

An important barrier to the safe deployment of machine learning systems is the risk of task ambiguity, where multiple behaviors are consistent with the provided examples.

Active Learning

C5T5: Controllable Generation of Organic Molecules with Transformers

1 code implementation • 23 Aug 2021 • Daniel Rothchild, Alex Tamkin, Julie Yu, Ujval Misra, Joseph Gonzalez

Methods for designing organic materials with desired properties have high potential impact across fields such as medicine, renewable energy, petrochemical engineering, and agriculture.

Drug Discovery

On the Opportunities and Risks of Foundation Models

no code implementations • 16 Aug 2021 • Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, Percy Liang

AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks.

Transfer Learning

Understanding the Capabilities, Limitations, and Societal Impact of Large Language Models

no code implementations • 4 Feb 2021 • Alex Tamkin, Miles Brundage, Jack Clark, Deep Ganguli

On October 14th, 2020, researchers from OpenAI, the Stanford Institute for Human-Centered Artificial Intelligence, and other universities convened to discuss open research questions surrounding GPT-3, the largest publicly disclosed dense language model at the time.

Language Modelling

Viewmaker Networks: Learning Views for Unsupervised Representation Learning

1 code implementation • ICLR 2021 • Alex Tamkin, Mike Wu, Noah Goodman

Contrastive methods rely on transformed "views" of each input, but designing these views requires considerable trial and error by human experts, hindering widespread adoption of unsupervised representation learning methods across domains and modalities.

Contrastive Learning • Representation Learning
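For context, view-based contrastive methods optimize an objective like InfoNCE, which pulls together the embeddings of two views of the same input and pushes apart embeddings of different inputs. A minimal NumPy sketch of that loss, as a generic simplification rather than the viewmaker architecture itself:

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """Contrastive (InfoNCE) loss for a batch of paired views.
    z1[i] and z2[i] are embeddings of two views of the same input."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature                 # (n, n) pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))              # matched pairs are the positives

rng = np.random.default_rng(0)
z1, z2 = rng.normal(size=(8, 32)), rng.normal(size=(8, 32))
print(info_nce(z1, z2))
```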

Being Optimistic to Be Conservative: Quickly Learning a CVaR Policy

no code implementations • 5 Nov 2019 • Ramtin Keramati, Christoph Dann, Alex Tamkin, Emma Brunskill

While maximizing expected return is the goal in most reinforcement learning approaches, risk-sensitive objectives such as conditional value at risk (CVaR) are more suitable for many high-stakes applications.

Reinforcement Learning
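For context, CVaR at level α is the expected return over the worst α-fraction of outcomes, so optimizing it guards against tail risk rather than just the mean. A minimal empirical estimator, illustrative only and not the paper's optimism-based algorithm:

```python
import numpy as np

def empirical_cvar(returns, alpha=0.05):
    """CVaR_alpha: the mean return over the worst alpha-fraction
    of outcomes (lower return = worse)."""
    returns = np.sort(np.asarray(returns))           # ascending: worst first
    k = max(1, int(np.ceil(alpha * len(returns))))   # size of the tail
    return returns[:k].mean()

rng = np.random.default_rng(0)
returns = rng.normal(loc=1.0, scale=2.0, size=10_000)
print(empirical_cvar(returns, alpha=0.05))           # well below the mean of ~1.0
```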
