Search Results for author: Muthu Chidambaram

Found 6 papers, 4 papers with code

How Flawed is ECE? An Analysis via Logit Smoothing

no code implementations • 15 Feb 2024 • Muthu Chidambaram, Holden Lee, Colin McSwiggen, Semon Rezchikov

Informally, a model is calibrated if its predictions are correct with a probability that matches the confidence of the prediction.

Image Classification
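
Calibration as described above is usually quantified with the expected calibration error (ECE) named in the title. Purely as a rough point of reference (the function name compute_ece, the equal-width binning, and n_bins=15 are assumptions of this sketch, not details taken from the paper), a binned ECE estimate can be computed as follows:

```python
import numpy as np

def compute_ece(confidences, correct, n_bins=15):
    """Binned expected calibration error: average |accuracy - confidence|
    over equal-width confidence bins, weighted by bin frequency."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)  # right-inclusive bins
        if in_bin.any():
            ece += in_bin.mean() * abs(correct[in_bin].mean() - confidences[in_bin].mean())
    return ece
```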

For Better or For Worse? Learning Minimum Variance Features With Label Augmentation

no code implementations • 10 Feb 2024 • Muthu Chidambaram, Rong Ge

Data augmentation has been pivotal in successfully training deep learning models on classification tasks over the past decade.

Data Augmentation • Image Classification

On the Limitations of Temperature Scaling for Distributions with Overlaps

1 code implementation • 1 Jun 2023 • Muthu Chidambaram, Rong Ge

Despite the impressive generalization capabilities of deep neural networks, they have been repeatedly shown to be overconfident when they are wrong.

Data Augmentation • Image Classification
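
Temperature scaling, whose limitations the paper examines, is the common post-hoc recalibration step of dividing a network's logits by a single learned scalar before the softmax; larger temperatures soften overconfident predictions. A minimal sketch (the example logits and printed values are illustrative, not taken from the paper):

```python
import numpy as np

def temperature_scale(logits, T):
    """Divide logits by a scalar temperature T before the softmax.
    T > 1 softens (reduces confidence), T < 1 sharpens; T is normally
    tuned on a held-out validation set."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)   # for numerical stability
    p = np.exp(z)
    return p / p.sum(axis=-1, keepdims=True)

logits = np.array([4.0, 1.0, 0.5])             # an overconfident prediction
print(temperature_scale(logits, T=1.0).max())  # ~0.93
print(temperature_scale(logits, T=2.0).max())  # ~0.72
```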

Hiding Data Helps: On the Benefits of Masking for Sparse Coding

1 code implementation • 24 Feb 2023 • Muthu Chidambaram, Chenwei Wu, Yu Cheng, Rong Ge

Furthermore, drawing from the growing body of work on self-supervised learning, we propose a novel masking objective for which recovering the ground-truth dictionary is in fact optimal as the signal increases, for a large class of data-generating processes.

Dictionary Learning • Self-Supervised Learning
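
The snippet above does not spell out the proposed masking objective, so the following is only a generic illustration of masked reconstruction in a dictionary-learning setting; the function name, the ridge-regularized code fit, and mask_frac are assumptions of this sketch, not the paper's objective.

```python
import numpy as np

def masked_recon_loss(A, X, mask_frac=0.5, reg=1e-2, seed=0):
    """Generic masked-reconstruction loss for a dictionary A of shape (d, k):
    hide a random subset of each sample's coordinates, fit a code from the
    visible coordinates (ridge-regularized least squares here, for simplicity),
    and score the reconstruction of the hidden coordinates."""
    rng = np.random.default_rng(seed)
    d, k = A.shape
    losses = []
    for x in X:  # X has shape (n, d)
        hidden = rng.random(d) < mask_frac
        if not hidden.any() or hidden.all():
            continue  # need both visible and hidden coordinates
        A_v = A[~hidden]  # dictionary rows for the visible coordinates
        code = np.linalg.solve(A_v.T @ A_v + reg * np.eye(k), A_v.T @ x[~hidden])
        losses.append(np.mean((A[hidden] @ code - x[hidden]) ** 2))
    return float(np.mean(losses))
```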

Provably Learning Diverse Features in Multi-View Data with Midpoint Mixup

1 code implementation • 24 Oct 2022 • Muthu Chidambaram, Xiang Wang, Chenwei Wu, Rong Ge

Mixup is a data augmentation technique that relies on training using random convex combinations of data points and their labels.

Data Augmentation • Image Classification
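
As a concrete reference for the convex-combination construction described above, here is a minimal NumPy sketch of a standard Mixup batch; the single per-batch mixing weight and the function name are common conventions rather than details of this paper, whose Midpoint variant corresponds roughly to fixing the weight at 1/2.

```python
import numpy as np

def mixup_batch(x, y_onehot, alpha=1.0, seed=None):
    """Standard Mixup: replace a batch with random convex combinations of
    pairs of examples and of their one-hot labels, with mixing weight
    lambda drawn from Beta(alpha, alpha)."""
    rng = np.random.default_rng(seed)
    lam = rng.beta(alpha, alpha)        # single mixing weight for the batch
    perm = rng.permutation(len(x))      # random partner for each example
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return x_mix, y_mix
```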

Towards Understanding the Data Dependency of Mixup-style Training

1 code implementation • ICLR 2022 • Muthu Chidambaram, Xiang Wang, Yuzheng Hu, Chenwei Wu, Rong Ge

Despite seeing very few true data points during training, models trained using Mixup seem to still minimize the original empirical risk and exhibit better generalization and robustness on various tasks when compared to standard training.
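
Stated in standard notation (a textbook-style formulation, not equations copied from the paper), the observation is that minimizing the Mixup risk, which only ever sees convex combinations of pairs, appears to also minimize the ordinary empirical risk:

```latex
% Ordinary empirical risk on the n original training points
\hat{R}(\theta) \;=\; \frac{1}{n} \sum_{i=1}^{n} \ell\bigl(f_\theta(x_i),\, y_i\bigr)

% Mixup-style risk: loss on convex combinations of pairs,
% with mixing weight \lambda \sim \mathrm{Beta}(\alpha, \alpha)
\hat{R}_{\mathrm{mix}}(\theta) \;=\;
  \mathbb{E}_{\lambda}\!\left[ \frac{1}{n^{2}} \sum_{i=1}^{n}\sum_{j=1}^{n}
  \ell\bigl(f_\theta(\lambda x_i + (1-\lambda) x_j),\; \lambda y_i + (1-\lambda) y_j\bigr) \right]
```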
