Search Results for author: Anish Madan

Found 7 papers, 5 papers with code

Roboflow100-VL: A Multi-Domain Object Detection Benchmark for Vision-Language Models

1 code implementation • 27 May 2025 • Peter Robicheaux, Matvei Popov, Anish Madan, Isaac Robinson, Joseph Nelson, Deva Ramanan, Neehar Peri

Vision-language models (VLMs) trained on internet-scale data achieve remarkable zero-shot detection performance on common objects like car, truck, and pedestrian.

Concept Alignment • object-detection • +1

SMORE: Simultaneous Map and Object REconstruction

no code implementations • 19 Jun 2024 • Nathaniel Chodosh, Anish Madan, Simon Lucey, Deva Ramanan

We take a holistic perspective and optimize a compositional model of a dynamic scene that decomposes the world into rigidly-moving objects and the background.

Depth Completion • Dynamic Reconstruction • +6

Revisiting Few-Shot Object Detection with Vision-Language Models

1 code implementation • 22 Dec 2023 • Anish Madan, Neehar Peri, Shu Kong, Deva Ramanan

Concretely, we propose Foundational FSOD, a new benchmark protocol that evaluates detectors pre-trained on any external data and fine-tuned on multi-modal (text and visual) K-shot examples per target class.

Autonomous Vehicles • Few-Shot Object Detection • +3

B-SMALL: A Bayesian Neural Network approach to Sparse Model-Agnostic Meta-Learning

1 code implementation • 1 Jan 2021 • Anish Madan, Ranjitha Prasad

We demonstrate the performance of B-MAML on classification and regression tasks, and highlight that training a sparsifying BNN using MAML indeed improves the parameter footprint of the model while performing at par with, or even outperforming, the MAML approach.

Domain Adaptation • Meta-Learning • +1

REGroup: Rank-aggregating Ensemble of Generative Classifiers for Robust Predictions

1 code implementation • 18 Jun 2020 • Lokender Tiwari, Anish Madan, Saket Anand, Subhashis Banerjee

Specifically, we devise an ensemble of these generative classifiers that rank-aggregates their predictions via a Borda count-based consensus.
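A Borda count consensus like the one described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the per-classifier class scores are assumed to be given, and each classifier awards a class points equal to its rank (the top-ranked class earns the most points), with the class accumulating the most points winning.

```python
import numpy as np

def borda_consensus(scores):
    """Rank-aggregate per-classifier class scores via Borda count.

    scores: array of shape (n_classifiers, n_classes); higher score = better.
    Returns the index of the winning class.
    """
    n_classifiers, n_classes = scores.shape
    points = np.zeros(n_classes)
    for s in scores:
        # argsort of argsort yields each class's rank (0 = lowest score),
        # so the top-ranked class earns n_classes - 1 points
        ranks = np.argsort(np.argsort(s))
        points += ranks
    return int(np.argmax(points))

# Toy example: three classifiers scoring four classes
scores = np.array([
    [0.1, 0.7, 0.2, 0.0],
    [0.3, 0.4, 0.2, 0.1],
    [0.5, 0.3, 0.1, 0.1],
])
print(borda_consensus(scores))  # → 1
```

Rank aggregation of this kind is less sensitive to any single classifier's miscalibrated scores than averaging raw probabilities, which is the intuition behind using it for robust predictions.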

Adversarial Attack

C3VQG: Category Consistent Cyclic Visual Question Generation

1 code implementation • 15 May 2020 • Shagun Uppal, Anish Madan, Sarthak Bhagat, Yi Yu, Rajiv Ratn Shah

In this paper, we exploit the different visual cues and concepts in an image to generate questions using a variational autoencoder (VAE) without ground-truth answers.

Natural Questions • Question Generation • +1
