Search Results for author: Unmesh Kurup

Found 11 papers, 2 papers with code

No Shifted Augmentations (NSA): compact distributions for robust self-supervised Anomaly Detection

no code implementations • 19 Mar 2022 • Mohamed Yousef, Marcel Ackermann, Unmesh Kurup, Tom Bishop

We propose novel architectural modifications to the self-supervised feature learning step that enable such compact distributions for in-distribution (ID) data to be learned.

Out of Distribution (OOD) Detection • Self-Supervised Anomaly Detection • +2

No Shifted Augmentations (NSA): strong baselines for self-supervised Anomaly Detection

no code implementations • 29 Sep 2021 • Mohamed Yousef, Tom Bishop, Unmesh Kurup

We propose novel architectural modifications to the self-supervised feature learning step that enable such compact in-distribution (ID) distributions to be learned.

Out of Distribution (OOD) Detection • Self-Supervised Anomaly Detection • +3

Stabilizing Bi-Level Hyperparameter Optimization using Moreau-Yosida Regularization

1 code implementation • 27 Jul 2020 • Sauptik Dhar, Unmesh Kurup, Mohak Shah

This research proposes using the Moreau-Yosida envelope to stabilize the convergence behavior of bi-level hyperparameter optimization solvers, and introduces a new algorithm, Moreau-Yosida regularized Hyperparameter Optimization (MY-HPO).

Hyperparameter Optimization
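The Moreau-Yosida envelope the abstract refers to replaces a possibly nonsmooth objective f with the smoothed function M_λf(x) = min_y f(y) + (1/2λ)‖y − x‖², which is what makes it useful for stabilizing optimization. A minimal numerical sketch (an illustration of the envelope itself, not the paper's MY-HPO solver):

```python
import numpy as np

def moreau_envelope(f, x, lam=0.5, grid=None):
    """Numerically evaluate the Moreau-Yosida envelope
    M_lam f(x) = min_y f(y) + (1/(2*lam)) * (y - x)**2
    by brute-force minimization over a grid of candidate y values."""
    if grid is None:
        grid = np.linspace(x - 5.0, x + 5.0, 20001)
    vals = f(grid) + (grid - x) ** 2 / (2.0 * lam)
    return vals.min()

# The envelope of |x| is the Huber function: quadratic near 0,
# linear in the tails -- the smoothing is what aids convergence.
for x in (0.0, 0.2, 2.0):
    approx = moreau_envelope(np.abs, x, lam=0.5)
    exact = x**2 / (2 * 0.5) if abs(x) <= 0.5 else abs(x) - 0.5 / 2
    print(x, round(approx, 4), round(exact, 4))
```

The grid search stands in for the inner minimization; a practical solver would use a proximal step instead.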

Pruning Algorithms to Accelerate Convolutional Neural Networks for Edge Applications: A Survey

no code implementations • 8 May 2020 • Jiayi Liu, Samarth Tripathi, Unmesh Kurup, Mohak Shah

With the general trend of increasing Convolutional Neural Network (CNN) model sizes, model compression and acceleration techniques have become critical for the deployment of these models on edge devices.

Model Compression
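One family of compression techniques covered by such surveys is unstructured magnitude pruning, which zeroes the smallest-magnitude weights of a layer. A minimal NumPy sketch (a generic illustration, not a method from this particular survey):

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude entries of a weight tensor
    (unstructured magnitude pruning), keeping the rest unchanged."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)          # number of entries to drop
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold     # keep only large weights
    return weights * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))              # stand-in for a conv/fc layer
pruned = magnitude_prune(w, sparsity=0.9)  # ~90% of entries are now zero
```

In practice pruning is followed by fine-tuning to recover accuracy, and structured variants (removing whole filters or channels) are preferred when the target hardware cannot exploit unstructured sparsity.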

Auptimizer -- an Extensible, Open-Source Framework for Hyperparameter Tuning

1 code implementation • 6 Nov 2019 • Jiayi Liu, Samarth Tripathi, Unmesh Kurup, Mohak Shah

Tuning machine learning models at scale, especially finding the right hyperparameter values, can be difficult and time-consuming.

BIG-bench Machine Learning • Hyperparameter Optimization • +2

On-Device Machine Learning: An Algorithms and Learning Theory Perspective

no code implementations • 2 Nov 2019 • Sauptik Dhar, Junyao Guo, Jiayi Liu, Samarth Tripathi, Unmesh Kurup, Mohak Shah

However, on-device learning is an expansive field with connections to a large number of related topics in AI and machine learning (including online learning, model adaptation, one/few-shot learning, etc.).

BIG-bench Machine Learning • Few-Shot Learning • +1

Improving Model Training by Periodic Sampling over Weight Distributions

no code implementations • 14 May 2019 • Samarth Tripathi, Jiayi Liu, Unmesh Kurup, Mohak Shah, Sauptik Dhar

In this paper, we explore techniques centered around periodic sampling of model weights that provide convergence improvements on gradient update methods (vanilla SGD, Momentum, Adam) for a variety of vision problems (classification, detection, segmentation).
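One way to use periodically sampled weights is to average the snapshots taken along the optimization trajectory. The sketch below is a hypothetical simplification in that spirit (akin to stochastic weight averaging), not the paper's exact scheme:

```python
import numpy as np

def sgd_with_periodic_snapshots(grad, w0, lr=0.1, steps=200, period=10):
    """Plain SGD that snapshots the weights every `period` steps and
    returns both the final iterate and the snapshot average."""
    w = np.asarray(w0, dtype=float)
    snapshots = []
    for t in range(1, steps + 1):
        w = w - lr * grad(w)
        if t % period == 0:
            snapshots.append(w.copy())
    return w, np.mean(snapshots, axis=0)

# Noisy quadratic standing in for a minibatch loss: the averaged
# weights damp the oscillation of the raw final iterate.
rng = np.random.default_rng(1)
noisy_grad = lambda w: w + 0.5 * rng.normal(size=w.shape)
final_w, avg_w = sgd_with_periodic_snapshots(noisy_grad, np.ones(5))
```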

Is it Safe to Drive? An Overview of Factors, Challenges, and Datasets for Driveability Assessment in Autonomous Driving

no code implementations • 27 Nov 2018 • Junyao Guo, Unmesh Kurup, Mohak Shah

Furthermore, by discussing which driving scenarios are not covered by existing public datasets and which driveability factors need further investigation and data acquisition, this paper aims to encourage both targeted dataset collection and the proposal of novel driveability metrics that enhance the robustness of autonomous cars in adverse environments.

Autonomous Driving

Make (Nearly) Every Neural Network Better: Generating Neural Network Ensembles by Weight Parameter Resampling

no code implementations • 2 Jul 2018 • Jiayi Liu, Samarth Tripathi, Unmesh Kurup, Mohak Shah

We perform a variety of analyses using the MNIST dataset and validate the approach with a number of DNN models, using pre-trained models on the ImageNet dataset.
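One plausible reading of "weight parameter resampling" is to create ensemble members from a single trained model by redrawing a small fraction of its weights from a Gaussian fitted to the layer's empirical statistics. The sketch below is that hypothetical reading, not the paper's verified procedure:

```python
import numpy as np

def resample_ensemble(weights, n_members=5, frac=0.1, seed=0):
    """Create ensemble members from one trained weight vector by
    redrawing a random fraction of its entries from a Gaussian
    fitted to the empirical mean/std of those weights."""
    rng = np.random.default_rng(seed)
    mu, sigma = weights.mean(), weights.std()
    members = []
    for _ in range(n_members):
        w = weights.copy()
        idx = rng.random(w.shape) < frac          # ~frac of entries
        w[idx] = rng.normal(mu, sigma, size=idx.sum())
        members.append(w)
    return members

base = np.random.default_rng(2).normal(size=(256,))  # stand-in layer
members = resample_ensemble(base)
# Members stay close to the base model but differ from one another,
# so their predictions can be averaged like a conventional ensemble.
```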

Effective Building Block Design for Deep Convolutional Neural Networks using Search

no code implementations • 25 Jan 2018 • Jayanta K. Dutta, Jiayi Liu, Unmesh Kurup, Mohak Shah

We apply this technique to generate models for multiple image datasets and show that these models achieve performance comparable to the state of the art (even surpassing it in one case).
