Search Results for author: Chaithanya Kumar Mummadi

Found 17 papers, 4 papers with code

Defending Against Universal Perturbations With Shared Adversarial Training

no code implementations ICCV 2019 Chaithanya Kumar Mummadi, Thomas Brox, Jan Hendrik Metzen

Classifiers such as deep neural networks have been shown to be vulnerable to adversarial perturbations on problems with high-dimensional input spaces.

Image Classification Semantic Segmentation

Group Pruning using a Bounded-Lp norm for Group Gating and Regularization

no code implementations 9 Aug 2019 Chaithanya Kumar Mummadi, Tim Genewein, Dan Zhang, Thomas Brox, Volker Fischer

We achieve state-of-the-art pruning results for ResNet-50 with higher accuracy on ImageNet.

Does enhanced shape bias improve neural network robustness to common corruptions?

no code implementations ICLR 2021 Chaithanya Kumar Mummadi, Ranjitha Subramaniam, Robin Hutmacher, Julien Vitay, Volker Fischer, Jan Hendrik Metzen

We conclude that the data augmentation induced by style variation accounts for the improved corruption robustness, and that the increased shape bias is only a byproduct.

Data Augmentation

Overcoming Shortcut Learning in a Target Domain by Generalizing Basic Visual Factors from a Source Domain

1 code implementation 20 Jul 2022 Piyapat Saranrittichai, Chaithanya Kumar Mummadi, Claudia Blaiotta, Mauricio Munoz, Volker Fischer

Our approach extends the training set with an additional dataset (the source domain), which is specifically designed to facilitate learning independent representations of basic visual factors.

Multi-Attribute Open Set Recognition

1 code implementation 14 Aug 2022 Piyapat Saranrittichai, Chaithanya Kumar Mummadi, Claudia Blaiotta, Mauricio Munoz, Volker Fischer

While conventional OSR approaches can detect Out-of-Distribution (OOD) samples, they cannot provide explanations indicating which underlying visual attribute(s) (e.g., shape, color, or background) cause a specific sample to be unknown.

Attribute Image Classification +1

PerceptionCLIP: Visual Classification by Inferring and Conditioning on Contexts

1 code implementation 2 Aug 2023 Bang An, Sicheng Zhu, Michael-Andrei Panaitescu-Liess, Chaithanya Kumar Mummadi, Furong Huang

Inspired by this, we observe that providing CLIP with contextual attributes improves zero-shot image classification and mitigates reliance on spurious features.

Classification Image Classification +4
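The snippet above suggests conditioning CLIP's zero-shot prompts on contextual attributes. A minimal sketch of that idea follows; the helper name, prompt template, and attribute sets are illustrative assumptions, not the paper's actual implementation. A CLIP-style classifier would embed each generated prompt and aggregate the per-prompt scores for each class.

```python
from itertools import product

def build_contextual_prompts(class_names, contexts):
    """Build one prompt per (class, context-combination) pair.

    `contexts` maps a contextual attribute (e.g. "background") to its
    possible values. Names and template are hypothetical sketches of
    the idea of conditioning zero-shot prompts on context.
    """
    prompts = {}
    attr_values = list(product(*contexts.values()))
    for cls in class_names:
        prompts[cls] = []
        for values in attr_values:
            context_desc = ", ".join(
                f"{attr} is {val}"
                for attr, val in zip(contexts.keys(), values)
            )
            prompts[cls].append(f"a photo of a {cls}, where {context_desc}")
    return prompts

# Example: two classes, two contextual attributes -> 4 prompts per class
prompts = build_contextual_prompts(
    ["cat", "dog"],
    {"background": ["grass", "indoors"], "illumination": ["bright", "dim"]},
)
```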

Zero-Shot Visual Classification with Guided Cropping

no code implementations 12 Sep 2023 Piyapat Saranrittichai, Mauricio Munoz, Volker Fischer, Chaithanya Kumar Mummadi

We empirically show that our approach improves zero-shot classification results across architectures and datasets, particularly for small objects.

Classification Object +3
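The title suggests cropping the image around the object before zero-shot classification, which would explain the gains for small objects. A rough sketch of such a cropping step, under the assumption that an object box is supplied by some localization stage not described in this listing:

```python
import numpy as np

def guided_crop(image, box, margin=0.1):
    """Crop an image around an object box so the object fills more of
    the frame before zero-shot classification.

    The box is assumed to come from an object-localization step
    (hypothetical here; the paper's exact pipeline is not shown).
    image: (H, W, C) array; box: (x0, y0, x1, y1) in pixels.
    """
    h, w = image.shape[:2]
    x0, y0, x1, y1 = box
    # expand the box by a relative margin, clipped to the image bounds
    mx = int(margin * (x1 - x0))
    my = int(margin * (y1 - y0))
    x0 = max(0, x0 - mx)
    y0 = max(0, y0 - my)
    x1 = min(w, x1 + mx)
    y1 = min(h, y1 + my)
    return image[y0:y1, x0:x1]
```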

AutoCLIP: Auto-tuning Zero-Shot Classifiers for Vision-Language Models

no code implementations 28 Sep 2023 Jan Hendrik Metzen, Piyapat Saranrittichai, Chaithanya Kumar Mummadi

We show that AutoCLIP consistently outperforms baselines across a broad range of vision-language models, datasets, and prompt templates, by up to 3 percentage points in accuracy.

Image Classification Language Modelling +1
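One reading of "auto-tuning" here is weighting prompt templates per image rather than averaging them uniformly. The sketch below illustrates that general idea with assumed details (the weighting rule and names are not taken from the paper): templates whose text embeddings match a given image better get more say in the class scores.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def template_weighted_scores(image_emb, template_class_embs, beta=1.0):
    """Per-image weighting of prompt templates (an illustrative sketch;
    the exact AutoCLIP rule is not given in this listing).

    image_emb: (d,) normalized image embedding.
    template_class_embs: (T, C, d) normalized text embeddings, one per
        (template, class) pair.
    Returns (C,) class scores; beta=0 recovers the uniform average.
    """
    sims = template_class_embs @ image_emb        # (T, C) similarities
    # weight each template by its aggregate similarity to this image
    weights = softmax(beta * sims.mean(axis=1))   # (T,)
    return weights @ sims                         # (C,) class scores
```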

Text-driven Prompt Generation for Vision-Language Models in Federated Learning

no code implementations 9 Oct 2023 Chen Qiu, Xingyu Li, Chaithanya Kumar Mummadi, Madan Ravi Ganesh, Zhenzhen Li, Lu Peng, Wan-Yi Lin

Prompt learning for vision-language models, e.g., CoOp, has shown great success in adapting CLIP to different downstream tasks, making it a promising solution for federated learning for computational reasons.

Federated Learning Image Classification
