no code implementations • 15 Feb 2024 • Muthu Chidambaram, Holden Lee, Colin McSwiggen, Semon Rezchikov
Informally, a model is calibrated if its predictions are correct with a probability that matches the confidence of the prediction.
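For concreteness, calibration is commonly quantified with the binned expected calibration error (ECE). The NumPy sketch below implements the standard binned estimator; it illustrates the general notion only and is not the specific analysis from this paper.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: the average |accuracy - confidence| over equal-width
    confidence bins, weighted by the fraction of samples in each bin."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            avg_conf = confidences[in_bin].mean()
            accuracy = correct[in_bin].mean()
            ece += in_bin.mean() * abs(accuracy - avg_conf)
    return ece

# Toy usage: top-prediction confidences and whether each prediction was correct.
conf = np.array([0.9, 0.8, 0.7, 0.95, 0.6])
hits = np.array([1, 1, 0, 1, 1], dtype=float)
print(expected_calibration_error(conf, hits))
```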
no code implementations • 10 Feb 2024 • Muthu Chidambaram, Rong Ge
Data augmentation has been pivotal to the successful training of deep learning models on classification tasks over the past decade.
1 code implementation • 1 Jun 2023 • Muthu Chidambaram, Rong Ge
Despite their impressive generalization capabilities, deep neural networks have been repeatedly shown to be overconfident when they are wrong.
1 code implementation • 24 Feb 2023 • Muthu Chidambaram, Chenwei Wu, Yu Cheng, Rong Ge
Furthermore, drawing on the growing body of work on self-supervised learning, we propose a novel masking objective for which, for a large class of data-generating processes, recovering the ground-truth dictionary is in fact optimal as the signal strength increases.
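As a rough illustration of what a masking objective for dictionary learning can look like, here is a minimal sketch on assumed toy data y = A x with sparse codes x; this is an illustrative stand-in, not the objective proposed in the paper. The idea: hide a random subset of each sample's coordinates, infer a code from the visible coordinates, and score reconstruction on the hidden ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup (all names and dimensions are illustrative):
# noiseless data Y = A_true @ X with sparse codes X.
d, k, n = 30, 10, 200
A_true = rng.normal(size=(d, k)) / np.sqrt(d)
X = rng.normal(size=(k, n)) * (rng.random((k, n)) < 0.3)  # sparse codes
Y = A_true @ X

def masked_loss(A, Y, mask_frac=0.3):
    """Generic masked-reconstruction loss: hide a random subset of each
    sample's coordinates, fit a code to the visible coordinates by least
    squares (a stand-in for sparse recovery), and measure reconstruction
    error on the hidden coordinates."""
    mask = rng.random(Y.shape) < mask_frac  # True = hidden coordinate
    total = 0.0
    for i in range(Y.shape[1]):
        vis = ~mask[:, i]
        code, *_ = np.linalg.lstsq(A[vis], Y[vis, i], rcond=None)
        total += np.sum((A[~vis] @ code - Y[~vis, i]) ** 2)
    return total / Y.shape[1]

print(masked_loss(A_true, Y))  # near zero for the ground-truth dictionary
```

On this noiseless toy data the ground-truth dictionary drives the masked loss to (near) zero, which loosely mirrors the flavor of optimality statement in the abstract.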
1 code implementation • 24 Oct 2022 • Muthu Chidambaram, Xiang Wang, Chenwei Wu, Rong Ge
Mixup is a data augmentation technique in which models are trained on random convex combinations of pairs of data points and their labels.
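A minimal NumPy sketch of the standard mixup operation (the generic technique of Zhang et al., not this paper's specific analysis):

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup_batch(x, y_onehot, alpha=1.0):
    """Standard mixup: convex combination of a batch with a random
    permutation of itself, using a Beta(alpha, alpha)-sampled weight."""
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(x))
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mix, y_mix

# Toy usage: a batch of 4 two-dimensional points with 3 classes.
x = rng.random((4, 2))
y = np.eye(3)[np.array([0, 2, 1, 0])]
x_mix, y_mix = mixup_batch(x, y)
print(x_mix, y_mix, sep="\n")
```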
1 code implementation • ICLR 2022 • Muthu Chidambaram, Xiang Wang, Yuzheng Hu, Chenwei Wu, Rong Ge
Despite seeing very few true data points during training, models trained with Mixup still appear to minimize the original empirical risk, and they exhibit better generalization and robustness than standard training across a variety of tasks.