no code implementations • 19 Mar 2022 • Mohamed Yousef, Marcel Ackermann, Unmesh Kurup, Tom Bishop
We propose novel architectural modifications to the self-supervised feature learning step that enable such compact distributions for in-distribution (ID) data to be learned.
Tasks: Out-of-Distribution (OOD) Detection, Self-Supervised Anomaly Detection, +2 more
no code implementations • 29 Sep 2021 • Mohamed Yousef, Tom Bishop, Unmesh Kurup
We propose novel architectural modifications to the self-supervised feature learning step that enable such compact ID distributions to be learned.
Tasks: Out-of-Distribution (OOD) Detection, Self-Supervised Anomaly Detection, +3 more
no code implementations • 18 Jun 2021 • Sauptik Dhar, Javad Heydari, Samarth Tripathi, Unmesh Kurup, Mohak Shah
The limited availability of labeled data makes any supervised learning problem challenging.
1 code implementation • 27 Jul 2020 • Sauptik Dhar, Unmesh Kurup, Mohak Shah
This research proposes using the Moreau-Yosida envelope to stabilize the convergence behavior of bi-level hyperparameter optimization solvers, and introduces a new algorithm, Moreau-Yosida regularized Hyperparameter Optimization (MY-HPO).
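The Moreau-Yosida envelope referenced in this abstract is a standard smoothing construction: M_f(x) = min_y f(y) + ||y - x||^2 / (2λ), which keeps the minimizers of f while smoothing it. The sketch below is a generic numerical illustration of the envelope itself (brute-force over a grid, with f(x) = |x|, whose envelope is the Huber function), not an implementation of the MY-HPO solver from the paper:

```python
import numpy as np

def moreau_envelope(f, x, lam=1.0, grid=None):
    """Numerically evaluate the Moreau-Yosida envelope
    M_f(x) = min_y f(y) + ||y - x||^2 / (2*lam)
    by brute-force minimization over a grid (illustration only)."""
    if grid is None:
        grid = np.linspace(x - 10.0, x + 10.0, 100001)
    vals = f(grid) + (grid - x) ** 2 / (2.0 * lam)
    i = np.argmin(vals)
    # Return the envelope value and the proximal point (the minimizing y).
    return vals[i], grid[i]

# For f(x) = |x| the proximal point is the soft-thresholding of x:
# prox = sign(x) * max(|x| - lam, 0), and the envelope is the Huber loss.
val, prox = moreau_envelope(np.abs, 3.0, lam=1.0)
# prox ≈ 2.0, val ≈ |2| + (2 - 3)^2 / 2 = 2.5
```

In an HPO context the appeal of this construction is that the smoothed surrogate has better-behaved gradients than the original bi-level objective, which is what the abstract credits for the stabilized convergence.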
no code implementations • 8 May 2020 • Jiayi Liu, Samarth Tripathi, Unmesh Kurup, Mohak Shah
With the general trend of increasing Convolutional Neural Network (CNN) model sizes, model compression and acceleration techniques have become critical for the deployment of these models on edge devices.
1 code implementation • 6 Nov 2019 • Jiayi Liu, Samarth Tripathi, Unmesh Kurup, Mohak Shah
Tuning machine learning models at scale, especially finding the right hyperparameter values, can be difficult and time-consuming.
no code implementations • 2 Nov 2019 • Sauptik Dhar, Junyao Guo, Jiayi Liu, Samarth Tripathi, Unmesh Kurup, Mohak Shah
However, on-device learning is an expansive field with connections to a large number of related topics in AI and machine learning (including online learning, model adaptation, one/few-shot learning, etc.).
no code implementations • 14 May 2019 • Samarth Tripathi, Jiayi Liu, Unmesh Kurup, Mohak Shah, Sauptik Dhar
In this paper, we explore techniques centered around periodic sampling of model weights that provide convergence improvements for gradient update methods (vanilla SGD, Momentum, Adam) on a variety of vision problems (classification, detection, segmentation).
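One well-known instance of periodic weight sampling is to average the iterates every fixed number of steps, in the spirit of stochastic weight averaging; the paper's exact scheme may differ, so the sketch below is only a generic illustration on a toy least-squares problem:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: minimize ||A w - b||^2 / n by gradient descent.
A = rng.normal(size=(100, 5))
b = rng.normal(size=100)

def grad(w):
    return 2.0 * A.T @ (A @ w - b) / len(b)

w = np.zeros(5)
avg, n_samples = np.zeros(5), 0
lr, period = 0.01, 10  # illustrative values, not from the paper

for step in range(1, 501):
    w -= lr * grad(w)                 # plain gradient update (full batch here)
    if step % period == 0:            # periodically sample the weights...
        n_samples += 1
        avg += (w - avg) / n_samples  # ...into a running average

# Closed-form least-squares solution for reference.
w_star = np.linalg.lstsq(A, b, rcond=None)[0]
```

Both the final iterate `w` and the periodically sampled average `avg` end up close to `w_star`; on noisy stochastic objectives the averaged iterate is typically the smoother of the two, which is the intuition behind such schemes.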
no code implementations • 27 Nov 2018 • Junyao Guo, Unmesh Kurup, Mohak Shah
Furthermore, by discussing which driving scenarios are not covered by existing public datasets and which driveability factors need further investigation and data acquisition, this paper aims to encourage both targeted dataset collection and the proposal of novel driveability metrics that enhance the robustness of autonomous cars in adverse environments.
no code implementations • 2 Jul 2018 • Jiayi Liu, Samarth Tripathi, Unmesh Kurup, Mohak Shah
We perform a variety of analyses using the MNIST dataset and validate the approach on a number of DNN models pre-trained on the ImageNet dataset.
no code implementations • 25 Jan 2018 • Jayanta K. Dutta, Jiayi Liu, Unmesh Kurup, Mohak Shah
We apply this technique to generate models for multiple image datasets and show that these models achieve performance comparable to the state of the art (even surpassing it in one case).