no code implementations • 1 Apr 2024 • Kartik Gupta, Rahul Vippala, Sahima Srivastava
In this project we explore two things: the classification performance of these attention-based networks on the ModelNet10 dataset, and the use of the trained model, after fine-tuning, to classify the 3D MNIST dataset.
1 code implementation • 14 Mar 2024 • Qinyu Zhao, Ming Xu, Kartik Gupta, Akshay Asthana, Liang Zheng, Stephen Gould
This study uses linear probing to shed light on the hidden knowledge at the output layer of LVLMs.
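Linear probing in this sense usually means freezing the model and training a simple linear classifier on its hidden features. A minimal sketch with synthetic stand-ins (the features, labels, and probed property are placeholders, not the paper's LVLM setup):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical setup: `features` stand in for frozen activations taken from a
# model's output layer; `labels` stand in for the property being probed.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 16))
labels = (features[:, 0] > 0).astype(int)  # synthetic probed property

# The probe itself: a linear classifier trained on the frozen features.
probe = LogisticRegression().fit(features[:150], labels[:150])
accuracy = probe.score(features[150:], labels[150:])
print(f"probe accuracy: {accuracy:.2f}")
```

High probe accuracy is taken as evidence that the property is linearly decodable from the frozen representation.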
1 code implementation • 1 Feb 2024 • Qinyu Zhao, Ming Xu, Kartik Gupta, Akshay Asthana, Liang Zheng, Stephen Gould
Feature shaping refers to a family of methods that exhibit state-of-the-art performance for out-of-distribution (OOD) detection.
Out-of-Distribution Detection
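The feature-shaping family includes methods such as activation clipping (as in ReAct). A NumPy sketch of that flavor with an energy-style score; all tensors below are synthetic stand-ins, and the exact methods studied in the paper may differ:

```python
import numpy as np

def shaped_ood_score(features, weights, bias, clip):
    """Energy-style score after a ReAct-like feature-shaping step:
    clip penultimate activations, then score the resulting logits.
    Illustrative only, not the paper's exact formulation."""
    shaped = np.minimum(features, clip)      # feature shaping: clip large activations
    logits = shaped @ weights + bias
    # Negative energy: higher values suggest in-distribution inputs.
    return np.log(np.exp(logits).sum(axis=1))

rng = np.random.default_rng(1)
w, b = rng.normal(size=(8, 3)), np.zeros(3)
in_dist = rng.normal(1.0, 0.5, size=(100, 8))
out_dist = rng.normal(3.0, 2.0, size=(100, 8))   # anomalously large activations
clip = np.quantile(in_dist, 0.9)                 # threshold fit on in-distribution data
print(shaped_ood_score(in_dist, w, b, clip).mean(),
      shaped_ood_score(out_dist, w, b, clip).mean())
```

The clipping step bounds the influence of the extreme activations that OOD inputs tend to produce, which is the shared intuition behind this family of methods.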
no code implementations • 9 Nov 2023 • Kartik Gupta, Akshay Asthana
While quantization-aware training (QAT) is the well-studied approach to quantizing networks at low precision, most research focuses on over-parameterized classification networks, with limited studies on popular, edge-device-friendly single-shot object detection and semantic segmentation methods such as YOLO.
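The forward pass of QAT is commonly simulated with "fake quantization": weights are rounded to a uniform grid while gradients flow through a straight-through estimator. A minimal generic sketch, not the paper's exact scheme:

```python
import numpy as np

def fake_quantize(w, bits=4):
    """Simulated quantization as used in QAT forward passes: snap weights
    to a symmetric uniform grid. (In training, the backward pass would use
    a straight-through estimator; this sketch shows only the forward step.)"""
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

w = np.array([0.31, -0.72, 0.05, 0.9])
print(fake_quantize(w, bits=4))
```

Every output value lies on the grid defined by `scale`, and the rounding error is at most half a grid step.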
no code implementations • 22 Dec 2022 • Kartik Gupta, Thalaiyasingam Ajanthan, Anton van den Hengel, Stephen Gould
Most current contrastive learning approaches append a parametrized projection head to the end of some backbone network to optimize the InfoNCE objective and then discard the learned projection head after training.
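The objective being optimized through that projection head can be written compactly. A NumPy sketch of InfoNCE with a linear stand-in for the head (illustrative only, not the authors' training setup):

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """InfoNCE over two batches of projected views; row i of z1 and row i
    of z2 form the positive pair, all other rows serve as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / temperature                       # pairwise similarities
    # Cross-entropy with the diagonal (matching pairs) as targets.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
backbone = rng.normal(size=(4, 32))   # stand-in for backbone features of two views
head = rng.normal(size=(32, 8))       # stand-in for the (later discarded) projection head
loss = info_nce_loss(backbone @ head,
                     backbone @ head + 0.01 * rng.normal(size=(4, 8)))
print(loss)
```

The head maps backbone features into the space where similarity is measured; the question the paper raises is what is lost or gained by throwing it away after training.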
no code implementations • 10 Nov 2022 • Nikhil Bansal, Kartik Gupta, Kiruthika Kannan, Sivani Pentapati, Ravi Kiran Sarvadevabhatla
Pictionary, the popular sketch-based guessing game, provides an opportunity to analyze shared goal cooperative game play in restricted communication settings.
no code implementations • 22 Jun 2022 • Kartik Gupta, Marios Fournarakis, Matthias Reisser, Christos Louizos, Markus Nagel
We perform extensive experiments on standard FL benchmarks to evaluate our proposed FedAvg variants for quantization robustness and provide a convergence analysis for our Quantization-Aware variants in FL.
1 code implementation • ICLR 2021 • Kartik Gupta, Amir Rahimi, Thalaiyasingam Ajanthan, Thomas Mensink, Cristian Sminchisescu, Richard Hartley
From this, by approximating the empirical cumulative distribution using a differentiable function via splines, we obtain a recalibration function, which maps the network outputs to actual (calibrated) class assignment probabilities.
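The recalibration idea can be illustrated with a cruder stand-in: histogram binning, which also maps raw confidences to empirical accuracies (the paper instead fits a differentiable spline to the empirical cumulative distribution; everything below is synthetic):

```python
import numpy as np

# Synthetic overconfident model: true accuracy at confidence c is roughly c - 0.2.
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=2000)        # raw top-class confidences
correct = rng.uniform(size=2000) < conf - 0.2  # whether each prediction was right

# Learn the recalibration map: empirical accuracy per confidence bin.
bins = np.linspace(0.5, 1.0, 11)
idx = np.clip(np.digitize(conf, bins) - 1, 0, 9)
bin_acc = np.array([correct[idx == b].mean() for b in range(10)])

def recalibrate(c):
    """Map a raw confidence to the empirical accuracy of its bin
    (hypothetical helper; the paper uses a smooth spline-based map)."""
    return bin_acc[np.clip(np.digitize(c, bins) - 1, 0, 9)]

print(recalibrate(0.95))   # noticeably below 0.95 for this overconfident model
```

Both the binning stand-in and the spline approach share the same goal: a learned map from network outputs to calibrated class-assignment probabilities.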
no code implementations • 23 Jun 2020 • Amir Rahimi, Thomas Mensink, Kartik Gupta, Thalaiyasingam Ajanthan, Cristian Sminchisescu, Richard Hartley
Calibration of neural networks is a critical aspect to consider when incorporating machine learning models in real-world decision-making systems, where the confidence of decisions is as important as the decisions themselves.
no code implementations • 19 Jun 2020 • Kartik Gupta, Arun Sai Suggala, Adarsh Prasad, Praneeth Netrapalli, Pradeep Ravikumar
We view the problem of designing minimax estimators as finding a mixed strategy Nash equilibrium of a zero-sum game.
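A classical way to approximate a mixed-strategy equilibrium of a zero-sum game is fictitious play, where each player repeatedly best-responds to the opponent's empirical strategy. A small NumPy sketch on matching pennies (illustrative only; the paper's estimator-design games have far richer strategy spaces):

```python
import numpy as np

def fictitious_play(A, iters=5000):
    """Approximate a mixed Nash equilibrium of the zero-sum game with payoff
    matrix A (row player maximizes) via fictitious play: each round, both
    players best-respond to the opponent's empirical action frequencies."""
    m, n = A.shape
    row_counts, col_counts = np.zeros(m), np.zeros(n)
    row_counts[0] = col_counts[0] = 1  # arbitrary initial plays
    for _ in range(iters):
        row_counts[np.argmax(A @ (col_counts / col_counts.sum()))] += 1
        col_counts[np.argmin((row_counts / row_counts.sum()) @ A)] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

# Matching pennies: the unique equilibrium mixes both actions 50/50.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
p, q = fictitious_play(A)
print(p, q)   # both close to [0.5, 0.5]
```

For zero-sum games the empirical frequencies are known to converge to equilibrium play, which is what makes this kind of iterative scheme a plausible computational route to minimax solutions.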
1 code implementation • 30 Mar 2020 • Kartik Gupta, Thalaiyasingam Ajanthan
In this work, we systematically study the robustness of quantized networks against gradient-based adversarial attacks and demonstrate that these quantized models suffer from gradient vanishing and exhibit a false sense of robustness.
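The vanishing-gradient effect can be seen in a toy example: a piecewise-constant quantized activation gives a gradient-based attack (approximated here by finite differences) no signal almost everywhere. All components below are synthetic stand-ins, not the networks studied in the paper:

```python
import numpy as np

def net(x, w, levels=8):
    """Toy network whose activation is quantized to `levels` discrete steps,
    making the output piecewise constant in the input."""
    a = np.round(np.tanh(x @ w) * levels) / levels
    return a.sum()

def fd_grad(f, x, eps=1e-6):
    """Finite-difference input gradient, standing in for the gradient
    an attack such as FGSM would try to follow."""
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d.flat[i] = eps
        g.flat[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

rng = np.random.default_rng(0)
x, w = rng.normal(size=(1, 4)), rng.normal(size=(4, 2))
g = fd_grad(lambda v: net(v, w), x)
print(g)   # almost everywhere exactly zero: the attack gets no signal
```

A zero gradient does not mean the decision boundary is far away; it only means this attack cannot find it, which is the "false sense of robustness" the abstract refers to.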
1 code implementation • 18 Oct 2019 • Thalaiyasingam Ajanthan, Kartik Gupta, Philip H. S. Torr, Richard Hartley, Puneet K. Dokania
Quantizing large neural networks (NNs) while maintaining performance is highly desirable for resource-limited devices due to reduced memory and time complexity.
1 code implementation • 30 Sep 2019 • Kartik Gupta, Lars Petersson, Richard Hartley
We present a new approach for a single view, image-based object pose estimation.
Ranked #13 on 6D Pose Estimation using RGB on Occlusion LineMOD
no code implementations • 20 Jun 2018 • Kartik Gupta, Darius Burschka, Arnav Bhavsar
Due to variations in geometric and motion constraints, different manipulation actions are possible when performing different sets of tasks with an object.
no code implementations • ICML 2017 • Yeshwanth Cherapanamjeri, Kartik Gupta, Prateek Jain
Finally, an application of our result to the robust PCA problem (low-rank+sparse matrix separation) leads to nearly linear time (in matrix dimensions) algorithm for the same; existing state-of-the-art methods require quadratic time.