Alongside the emulator, we develop an LM-based automatic safety evaluator that examines agent failures and quantifies associated risks.
Our decomposition consists of four error components: approximation, representation usability, probe generalization, and encoder generalization.
For non-contrastive learning, we use our framework to derive a simple and novel objective.
The development of CLIP [Radford et al., 2021] has sparked a debate on whether language supervision can result in vision models with more transferable representations than traditional image-only methods.
We introduce InstaAug, a method for automatically learning input-specific augmentations from data.
Machine learning systems often face a distribution shift between training and test data.
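As a toy illustration of why such a shift hurts (all data, model, and shift values here are illustrative, not from the paper): a nearest-centroid classifier fit on one input distribution can fail badly when the test inputs are translated, even though the task itself is unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    # Two well-separated Gaussian classes; `shift` translates the inputs.
    y = rng.integers(0, 2, size=n)
    centers = np.where(y[:, None] == 1, 1.0, -1.0)  # class means at +-(1, 1)
    X = centers + rng.normal(size=(n, 2)) * 0.5 + shift
    return X, y

# Fit a nearest-centroid classifier on the training distribution.
X_tr, y_tr = make_data(2000)
centroids = np.stack([X_tr[y_tr == c].mean(axis=0) for c in (0, 1)])

def predict(X):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
    return d.argmin(axis=1)

X_id, y_id = make_data(2000, shift=0.0)  # in-distribution test set
X_sh, y_sh = make_data(2000, shift=4.0)  # covariate-shifted test set

acc_id = (predict(X_id) == y_id).mean()  # near-perfect
acc_sh = (predict(X_sh) == y_sh).mean()  # collapses toward chance
```

The stale centroids sit near the old class means, so shifted inputs from both classes fall closest to the same centroid and accuracy drops toward 50%.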
Most data is automatically collected and only ever "seen" by algorithms.
Stationary stochastic processes (SPs) are a key component of many probabilistic models, such as those for off-the-grid spatio-temporal data.
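Stationarity means the covariance between two points depends only on their separation, not their absolute locations. A minimal numpy sketch (the squared-exponential kernel and grid here are illustrative): on a regular grid, a stationary kernel produces a Toeplitz covariance matrix, constant along each diagonal.

```python
import numpy as np

# Stationary squared-exponential covariance: a function of the lag t - t' only.
def k(t1, t2, lengthscale=1.0):
    return np.exp(-0.5 * ((t1 - t2) / lengthscale) ** 2)

# Stationarity holds at arbitrary (off-the-grid) locations too, but on a
# regular grid it is visible as a Toeplitz structure in the Gram matrix.
t = np.arange(5, dtype=float)
K = k(t[:, None], t[None, :])

# K[i, j] depends only on i - j, so every diagonal of K is constant.
is_toeplitz = all(
    np.allclose(np.diag(K, d), np.diag(K, d)[0]) for d in range(-4, 5)
)
```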
We hypothesize that models with a separate content- and location-based attention are more likely to extrapolate than those with common attention mechanisms.
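To make the distinction concrete, here is a hypothetical single-head sketch (random weights stand in for learned ones): content-based scores come from query-key dot products, while location-based scores come from a bias indexed purely by relative position; keeping the two as separate attention maps, rather than summing scores before one softmax, is the "separate attention" the hypothesis refers to.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
n, d = 6, 4
q = rng.normal(size=(n, d))  # queries
k = rng.normal(size=(n, d))  # keys
v = rng.normal(size=(n, d))  # values

# Content-based scores: depend only on what the tokens contain.
content_scores = q @ k.T / np.sqrt(d)

# Location-based scores: depend only on the relative offset j - i,
# via a bias table (random here; learned in a real model).
rel_bias = rng.normal(size=2 * n - 1)
idx = np.arange(n)
location_scores = rel_bias[idx[None, :] - idx[:, None] + n - 1]

# Two separate attention maps, combined only at the output.
out_content = softmax(content_scores) @ v
out_location = softmax(location_scores) @ v
out = np.concatenate([out_content, out_location], axis=-1)
```

Because the location pathway never sees token content, it can apply the same positional pattern to sequences longer than those seen in training, which is the intuition behind the extrapolation claim.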
We introduce the Convolutional Conditional Neural Process (ConvCNP), a new member of the Neural Process family that models translation equivariance in the data.
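The key property is translation equivariance: shifting the observed data shifts the predictions by the same amount. A minimal numpy sketch of that property in isolation (a plain 1-D convolution standing in for the ConvCNP's convolutional architecture; signal and kernel values are illustrative):

```python
import numpy as np

# A discrete convolution is translation equivariant: conv(shift(x)) equals
# shift(conv(x)), provided the signal stays away from the boundaries.
def conv(signal, kernel):
    return np.convolve(signal, kernel, mode="same")

rng = np.random.default_rng(0)
x = np.zeros(32)
x[5:10] = rng.normal(size=5)          # a localized signal
kernel = np.array([0.25, 0.5, 0.25])  # a small smoothing filter

shift = 7
x_shifted = np.roll(x, shift)

equivariant = np.allclose(
    conv(x_shifted, kernel), np.roll(conv(x, kernel), shift)
)
```

A model built entirely from such operations inherits this symmetry, so it generalizes across translations of the data rather than having to learn them.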