1 code implementation • 25 Sep 2023 • Yangjun Ruan, Honghua Dong, Andrew Wang, Silviu Pitis, Yongchao Zhou, Jimmy Ba, Yann Dubois, Chris J. Maddison, Tatsunori Hashimoto
Alongside the emulator, we develop an LM-based automatic safety evaluator that examines agent failures and quantifies associated risks.
1 code implementation • 6 Feb 2023 • Yann Dubois, Tatsunori Hashimoto, Percy Liang
Our decomposition consists of four error components: approximation, representation usability, probe generalization, and encoder generalization.
1 code implementation • 13 Sep 2022 • Yann Dubois, Tatsunori Hashimoto, Stefano Ermon, Percy Liang
For non-contrastive learning, we use our framework to derive a simple and novel objective.
no code implementations • 15 Jul 2022 • Shibani Santurkar, Yann Dubois, Rohan Taori, Percy Liang, Tatsunori Hashimoto
The development of CLIP [Radford et al., 2021] has sparked a debate on whether language supervision can result in vision models with more transferable representations than traditional image-only methods.
1 code implementation • 31 May 2022 • Ning Miao, Tom Rainforth, Emile Mathieu, Yann Dubois, Yee Whye Teh, Adam Foster, Hyunjik Kim
We introduce InstaAug, a method for automatically learning input-specific augmentations from data.
2 code implementations • ICLR 2022 • Yangjun Ruan, Yann Dubois, Chris J. Maddison
Machine learning systems often experience a distribution shift between training and testing.
Ranked #37 on
Image Classification
on ObjectNet
(using extra training data)
1 code implementation • NeurIPS 2021 • Yann Dubois, Benjamin Bloem-Reddy, Karen Ullrich, Chris J. Maddison
Most data is automatically collected and only ever "seen" by algorithms.
Ranked #1 on
Image Compression
on ImageNet
(using extra training data)
1 code implementation • NeurIPS 2020 • Yann Dubois, Douwe Kiela, David J. Schwab, Ramakrishna Vedantam
We address the question of characterizing and finding optimal representations for supervised learning.
2 code implementations • NeurIPS 2020 • Andrew Y. K. Foong, Wessel P. Bruinsma, Jonathan Gordon, Yann Dubois, James Requeima, Richard E. Turner
Stationary stochastic processes (SPs) are a key component of many probabilistic models, such as those for off-the-grid spatio-temporal data.
no code implementations • ACL 2020 • Yann Dubois, Gautier Dagan, Dieuwke Hupkes, Elia Bruni
We hypothesize that models with a separate content- and location-based attention are more likely to extrapolate than those with common attention mechanisms.
3 code implementations • ICLR 2020 • Jonathan Gordon, Wessel P. Bruinsma, Andrew Y. K. Foong, James Requeima, Yann Dubois, Richard E. Turner
We introduce the Convolutional Conditional Neural Process (ConvCNP), a new member of the Neural Process family that models translation equivariance in the data.