Search Results for author: Mohak Shah

Found 19 papers, 4 papers with code

A Survey on Proactive Customer Care: Enabling Science and Steps to Realize it

no code implementations • 11 Oct 2021 • Viswanath Ganapathy, Sauptik Dhar, Olimpiya Saha, Pelin Kurt Garberson, Javad Heydari, Mohak Shah

In recent times, advances in artificial intelligence (AI) and IoT have enabled seamless and viable maintenance of appliances in home and building environments.

Anomaly Detection

Stochastic Whitening Batch Normalization

no code implementations • CVPR 2021 • Shengdong Zhang, Ehsan Nezhadarya, Homa Fashandi, Jiayi Liu, Darin Graham, Mohak Shah

BN uses scaling and shifting to normalize activations of mini-batches to accelerate convergence and improve generalization.

Image Classification
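
For orientation, the standard BN forward pass the abstract refers to looks like the minimal NumPy sketch below (names and shapes are illustrative; the paper's contribution replaces this per-feature normalization with stochastic whitening):

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Standard BN forward pass over a mini-batch of activations.

    x: (N, D) activations; gamma, beta: (D,) learnable scale and shift.
    """
    mu = x.mean(axis=0)                     # per-feature mini-batch mean
    var = x.var(axis=0)                     # per-feature mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)   # normalize: zero mean, unit variance
    return gamma * x_hat + beta             # scale and shift (the learnable part)
```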

Stabilizing Bi-Level Hyperparameter Optimization using Moreau-Yosida Regularization

1 code implementation • 27 Jul 2020 • Sauptik Dhar, Unmesh Kurup, Mohak Shah

This research proposes using the Moreau-Yosida envelope to stabilize the convergence behavior of bi-level hyperparameter optimization solvers, and introduces a new algorithm, Moreau-Yosida regularized Hyperparameter Optimization (MY-HPO).

Hyperparameter Optimization
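
For reference, the Moreau-Yosida envelope named in the abstract is the standard smoothing shown below; how MY-HPO applies it to the bi-level objective is detailed in the paper:

```latex
% Standard Moreau-Yosida envelope of an objective f; MY-HPO applies this
% smoothing to the bi-level HPO objective (see the paper for details):
f_\lambda(x) = \min_{y}\; f(y) + \tfrac{1}{2\lambda}\lVert x - y \rVert^2,
\qquad
\nabla f_\lambda(x) = \tfrac{1}{\lambda}\bigl(x - \operatorname{prox}_{\lambda f}(x)\bigr).
```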

Pruning Algorithms to Accelerate Convolutional Neural Networks for Edge Applications: A Survey

no code implementations • 8 May 2020 • Jiayi Liu, Samarth Tripathi, Unmesh Kurup, Mohak Shah

With the general trend of increasing Convolutional Neural Network (CNN) model sizes, model compression and acceleration techniques have become critical for the deployment of these models on edge devices.

Model Compression
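
As a concrete instance of one technique family such surveys cover, here is a minimal magnitude-pruning sketch; it is illustrative only and not drawn from the paper:

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude weights (unstructured pruning).

    weights: ndarray of layer weights; sparsity: fraction of weights to remove.
    """
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    mask = np.abs(weights) > threshold      # keep only weights above the cutoff
    return weights * mask
```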

Multiclass Learning from Contradictions

1 code implementation • NeurIPS 2019 • Sauptik Dhar, Vladimir Cherkassky, Mohak Shah

We introduce the notion of learning from contradictions, a.k.a. Universum learning, for multiclass problems and propose a novel formulation for multiclass Universum SVM (MU-SVM).

Model Selection
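
For background, the binary Universum SVM formulation from the literature (Weston et al.) is shown below; MU-SVM's multiclass formulation, the paper's actual contribution, generalizes this idea:

```latex
% Binary Universum SVM (Weston et al.): universum samples x*_j carry no
% class label and are penalized for leaving the eps-insensitive band
% around the decision boundary. MU-SVM generalizes this to multiclass.
\min_{w,b}\;\; \tfrac{1}{2}\lVert w \rVert^2
  + C \sum_{i} \max\bigl(0,\; 1 - y_i f(x_i)\bigr)
  + C^{*} \sum_{j} \max\bigl(0,\; \lvert f(x^{*}_j) \rvert - \varepsilon\bigr),
\qquad f(x) = w^{\top} x + b.
```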

Auptimizer -- an Extensible, Open-Source Framework for Hyperparameter Tuning

1 code implementation • 6 Nov 2019 • Jiayi Liu, Samarth Tripathi, Unmesh Kurup, Mohak Shah

Tuning machine learning models at scale, especially finding the right hyperparameter values, can be difficult and time-consuming.

BIG-bench Machine Learning · Hyperparameter Optimization +2

On-Device Machine Learning: An Algorithms and Learning Theory Perspective

no code implementations • 2 Nov 2019 • Sauptik Dhar, Junyao Guo, Jiayi Liu, Samarth Tripathi, Unmesh Kurup, Mohak Shah

However, on-device learning is an expansive field with connections to a large number of related topics in AI and machine learning (including online learning, model adaptation, one/few-shot learning, etc.).

BIG-bench Machine Learning · Few-Shot Learning +1

Improving Model Training by Periodic Sampling over Weight Distributions

no code implementations • 14 May 2019 • Samarth Tripathi, Jiayi Liu, Unmesh Kurup, Mohak Shah, Sauptik Dhar

In this paper, we explore techniques centered around periodic sampling of model weights that provide convergence improvements on gradient update methods (vanilla SGD, Momentum, Adam) for a variety of vision problems (classification, detection, segmentation).
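
One plausible reading of periodic weight sampling is a snapshot-and-average scheme in the spirit of stochastic weight averaging; the PyTorch-style sketch below illustrates that interpretation only, with hypothetical function and parameter names, and the paper's exact scheme may differ:

```python
import copy

def train_with_periodic_sampling(model, optimizer, loader, loss_fn,
                                 steps=10_000, sample_every=100):
    """Snapshot weights every `sample_every` steps and average the
    snapshots at the end (PyTorch-style objects; names are hypothetical)."""
    snapshots, step = [], 0
    while step < steps:
        for x, y in loader:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
            step += 1
            if step % sample_every == 0:     # periodic weight sample
                snapshots.append(copy.deepcopy(model.state_dict()))
            if step >= steps:
                break
    # Average the sampled snapshots into the final weights.
    avg = {k: sum(s[k] for s in snapshots) / len(snapshots)
           for k in snapshots[0]}
    model.load_state_dict(avg)
    return model
```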

Is it Safe to Drive? An Overview of Factors, Challenges, and Datasets for Driveability Assessment in Autonomous Driving

no code implementations • 27 Nov 2018 • Junyao Guo, Unmesh Kurup, Mohak Shah

Furthermore, by discussing which driving scenarios are not covered by existing public datasets and which driveability factors need more investigation and data acquisition, this paper aims to encourage both targeted dataset collection and the proposal of novel driveability metrics that enhance the robustness of autonomous cars in adverse environments.

Autonomous Driving

Multiclass Universum SVM

1 code implementation • 23 Aug 2018 • Sauptik Dhar, Vladimir Cherkassky, Mohak Shah

We introduce Universum learning for multiclass problems and propose a novel formulation for multiclass Universum SVM (MU-SVM).

Model Selection

Make (Nearly) Every Neural Network Better: Generating Neural Network Ensembles by Weight Parameter Resampling

no code implementations • 2 Jul 2018 • Jiayi Liu, Samarth Tripathi, Unmesh Kurup, Mohak Shah

We perform a variety of analyses using the MNIST dataset and validate the approach on a number of DNN models pre-trained on the ImageNet dataset.
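
The title suggests building ensemble members by resampling a trained network's weight parameters; the sketch below shows one simple variant of that idea, Gaussian perturbation of pretrained weights (the paper's resampling distribution may differ):

```python
import copy
import torch

def resampled_ensemble(model, x, n_members=5, sigma=0.01):
    """Form ensemble members by adding Gaussian noise to a trained
    model's weights, then average the members' predictions."""
    preds = []
    with torch.no_grad():
        for _ in range(n_members):
            member = copy.deepcopy(model)
            for p in member.parameters():
                p.add_(sigma * torch.randn_like(p))   # resample weights
            preds.append(torch.softmax(member(x), dim=-1))
    return torch.stack(preds).mean(dim=0)             # ensemble prediction
```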

Effective Building Block Design for Deep Convolutional Neural Networks using Search

no code implementations • 25 Jan 2018 • Jayanta K. Dutta, Jiayi Liu, Unmesh Kurup, Mohak Shah

We apply this technique to generate models for multiple image datasets and show that these models achieve performance comparable to the state of the art (even surpassing it in one case).

Concept Drift Detection and Adaptation with Hierarchical Hypothesis Testing

no code implementations • 25 Jul 2017 • Shujian Yu, Zubin Abraham, Heng Wang, Mohak Shah, Yantao Wei, José C. Príncipe

A fundamental issue for statistical classification models in a streaming environment is that the joint distribution between predictor and response variables changes over time (a phenomenon also known as concept drift), such that their classification performance deteriorates dramatically.

General Classification · Two-sample testing
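
As background for the two-sample testing tag, the sketch below shows the basic windowed two-sample building block of a drift detector on a single feature; the paper's contribution is a hierarchical arrangement of such tests, which this flat version does not capture:

```python
from collections import deque
from scipy.stats import ks_2samp

def detect_drift(stream, window=500, alpha=0.01):
    """Flag drift times by comparing a reference window against the most
    recent window of a single feature with a two-sample KS test."""
    reference, current = deque(maxlen=window), deque(maxlen=window)
    for t, value in enumerate(stream):
        if len(reference) < window:
            reference.append(value)          # fill the reference window first
            continue
        current.append(value)
        if len(current) == window:
            _, p_value = ks_2samp(list(reference), list(current))
            if p_value < alpha:              # distributions differ: drift
                yield t
                reference = deque(current, maxlen=window)  # adapt reference
                current = deque(maxlen=window)
```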

Deep Symbolic Representation Learning for Heterogeneous Time-series Classification

no code implementations • 5 Dec 2016 • Shengdong Zhang, Soheil Bahrampour, Naveen Ramakrishnan, Mohak Shah

In this paper, we consider the problem of event classification with multi-variate time series data consisting of heterogeneous (continuous and categorical) variables.

Classification · General Classification +4
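
Purely to illustrate what a symbolic representation of a continuous channel can look like, here is a fixed quantile-binning sketch; the paper learns its symbolic representation rather than using a hand-fixed scheme like this:

```python
import numpy as np

def symbolize(series, n_symbols=8):
    """Map a continuous channel to integer symbols via quantile binning,
    so continuous and categorical variables share a discrete alphabet."""
    edges = np.quantile(series, np.linspace(0, 1, n_symbols + 1)[1:-1])
    return np.digitize(series, edges)        # one integer symbol per time step
```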

Universum Learning for Multiclass SVM

no code implementations • 29 Sep 2016 • Sauptik Dhar, Naveen Ramakrishnan, Vladimir Cherkassky, Mohak Shah

We introduce Universum learning for multiclass problems and propose a novel formulation for multiclass Universum SVM (MU-SVM).

Model Selection

Comparative Study of Deep Learning Software Frameworks

no code implementations • 19 Nov 2015 • Soheil Bahrampour, Naveen Ramakrishnan, Lukas Schott, Mohak Shah

The study covers several types of deep learning architectures, and we evaluate the performance of these frameworks on a single machine in both (multi-threaded) CPU and GPU (Nvidia Titan X) settings.

Risk Bounds for Randomized Sample Compressed Classifiers

no code implementations • NeurIPS 2008 • Mohak Shah

By extending the recently proposed Occam’s Hammer principle to the data-dependent settings, we derive point-wise versions of the bounds on the stochastic sample compressed classifiers and also recover the corresponding classical PAC-Bayes bound.
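
For reference, one standard form of the classical PAC-Bayes bound that the abstract says is recovered reads as follows:

```latex
% Classical PAC-Bayes bound (Langford/Seeger form): with probability at
% least 1 - \delta over an i.i.d. sample S of size m, simultaneously for
% all posteriors Q over classifiers, with prior P fixed in advance,
\mathrm{kl}\bigl(\hat{e}_S(Q) \,\big\|\, e_D(Q)\bigr)
  \;\le\; \frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{m+1}{\delta}}{m},
% where kl is the binary relative entropy and \hat{e}_S(Q), e_D(Q) are
% the empirical and true risks of the Gibbs classifier.
```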
