1 code implementation • ICLR 2022 • Honglin Yuan, Warren Morningstar, Lin Ning, Karan Singhal
Generalization studies in federated learning should therefore distinguish the performance gap due to unseen data from participating clients (the out-of-sample gap) from the gap due to entirely unseen client distributions (the participation gap).
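As a rough illustration of this decomposition (a minimal sketch; the client splits and accuracy numbers below are hypothetical, not from the paper):

```python
import numpy as np

def mean_acc(per_client_acc):
    """Average accuracy over a collection of clients."""
    return float(np.mean(list(per_client_acc.values())))

# Hypothetical per-client accuracies (client id -> accuracy).
train_acc   = {"c1": 0.92, "c2": 0.90}  # participating clients, training data
heldout_acc = {"c1": 0.88, "c2": 0.86}  # same clients, held-out data
unseen_acc  = {"c3": 0.81, "c4": 0.79}  # clients never seen during training

out_of_sample_gap = mean_acc(train_acc) - mean_acc(heldout_acc)    # unseen data, seen clients
participation_gap = mean_acc(heldout_acc) - mean_acc(unseen_acc)   # unseen client distributions
print(out_of_sample_gap, participation_gap)
```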
no code implementations • 24 Oct 2020 • Joshua Yao-Yu Lin, Hang Yu, Warren Morningstar, Jian Peng, Gilbert Holder
Dark matter substructures are of interest because they can reveal the properties of dark matter.
Cosmology and Nongalactic Astrophysics • Computational Physics
no code implementations • 18 Nov 2022 • Yangjun Ruan, Saurabh Singh, Warren Morningstar, Alexander A. Alemi, Sergey Ioffe, Ian Fischer, Joshua V. Dillon
Ensembling has proven to be a powerful technique for boosting model performance, uncertainty estimation, and robustness in supervised learning.
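A minimal sketch of the standard deep-ensemble recipe this line alludes to (the fixed-output "models" below are stand-ins for independently trained networks, not the paper's method): average member predictions for a point estimate and use member disagreement as a crude uncertainty signal.

```python
import numpy as np

def ensemble_predict(members, x):
    """Average the softmax outputs of independently trained members."""
    probs = np.stack([m(x) for m in members])  # (n_members, n_classes)
    mean = probs.mean(axis=0)                  # ensemble prediction
    disagreement = probs.std(axis=0).mean()    # crude uncertainty signal
    return mean, disagreement

# Hypothetical members: frozen softmax outputs standing in for trained nets.
members = [lambda x, p=p: p for p in
           [np.array([0.7, 0.2, 0.1]), np.array([0.6, 0.3, 0.1])]]
mean, disagreement = ensemble_predict(members, x=None)
```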
no code implementations • 2 Dec 2023 • Neha Kalibhat, Warren Morningstar, Alex Bijamov, Luyang Liu, Karan Singhal, Philip Mansfield
We define augmentations in frequency space called Fourier Domain Augmentations (FDA) and show that training SSL models on a combination of these and image augmentations can improve the downstream classification accuracy by up to 1.3% on ImageNet-1K.
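The abstract does not spell out the FDA operations; as one plausible frequency-space augmentation (a sketch under that assumption, not the paper's exact transforms), one can jitter the amplitude spectrum of an image while keeping its phase, then invert the FFT:

```python
import numpy as np

def fourier_amplitude_jitter(img, strength=0.1, rng=None):
    """Perturb the amplitude spectrum of a (H, W) image, keep the phase,
    and transform back. Illustrative only; not the paper's exact FDA ops."""
    rng = rng or np.random.default_rng()
    spec = np.fft.fft2(img)
    amp, phase = np.abs(spec), np.angle(spec)
    amp *= 1.0 + strength * rng.standard_normal(amp.shape)  # jitter amplitudes
    return np.real(np.fft.ifft2(amp * np.exp(1j * phase)))

augmented = fourier_amplitude_jitter(np.random.rand(32, 32), strength=0.2)
```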
no code implementations • 8 Mar 2024 • Warren Morningstar, Alex Bijamov, Chris Duvarney, Luke Friedman, Neha Kalibhat, Luyang Liu, Philip Mansfield, Renan Rojas-Gomez, Karan Singhal, Bradley Green, Sushant Prakash
We study the relative effects of data augmentations, pretraining algorithms, and model architectures in Self-Supervised Learning (SSL).