no code implementations • 12 Nov 2024 • Mohak Shah
Rapid developments in AI and its adoption across various domains have created a need to build robust guardrails and risk-containment plans while ensuring equitable benefits for society.
no code implementations • 11 Oct 2021 • Viswanath Ganapathy, Sauptik Dhar, Olimpiya Saha, Pelin Kurt Garberson, Javad Heydari, Mohak Shah
In recent times, advances in artificial intelligence (AI) and IoT have enabled seamless and viable maintenance of appliances in home and building environments.
no code implementations • 18 Jun 2021 • Sauptik Dhar, Javad Heydari, Samarth Tripathi, Unmesh Kurup, Mohak Shah
Limited availability of labeled data makes any supervised learning problem challenging.
no code implementations • CVPR 2021 • Shengdong Zhang, Ehsan Nezhadarya, Homa Fashandi, Jiayi Liu, Darin Graham, Mohak Shah
Batch Normalization (BN) uses scaling and shifting to normalize activations of mini-batches, accelerating convergence and improving generalization.
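The scale-and-shift step is simple to sketch; below is a minimal NumPy version of the standard BN transform (variable names are illustrative, not taken from the paper):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize a mini-batch per feature, then apply a learnable
    scale (gamma) and shift (beta)."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)  # zero-mean, unit-variance
    return gamma * x_hat + beta              # scale and shift

# Toy mini-batch of 3 samples with 2 features.
x = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
y = batch_norm(x, gamma=np.ones(2), beta=np.zeros(2))
```

With `gamma = 1` and `beta = 0` the output is simply the normalized activations; during training both are learned per feature.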
1 code implementation • 27 Jul 2020 • Sauptik Dhar, Unmesh Kurup, Mohak Shah
This research proposes using the Moreau-Yosida envelope to stabilize the convergence behavior of bi-level hyperparameter optimization solvers, and introduces a new algorithm, Moreau-Yosida regularized Hyperparameter Optimization (MY-HPO).
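The paper's bi-level solver is not reproduced here, but the Moreau-Yosida envelope itself is easy to illustrate: for f(w) = |w| the envelope min_w |w| + (1/(2λ))(w − x)² evaluates, via the soft-thresholding prox, to the smooth Huber function. This is a standard textbook example, not code from the paper:

```python
def moreau_envelope_abs(x, lam):
    """Moreau-Yosida envelope of f(w) = |w|:
        min_w |w| + (1/(2*lam)) * (w - x)**2.
    The minimizer is the soft-thresholding (prox) operator, and the
    envelope equals the Huber function, a smooth surrogate for |x|."""
    prox = max(abs(x) - lam, 0.0) * (1 if x >= 0 else -1)  # soft threshold
    return abs(prox) + (prox - x) ** 2 / (2 * lam)

v_outer = moreau_envelope_abs(3.0, 1.0)  # = |x| - lam/2 = 2.5 outside the kink
v_inner = moreau_envelope_abs(0.5, 1.0)  # = x**2/(2*lam) = 0.125 near the kink
```

The envelope is differentiable everywhere even though |w| is not, which is the smoothing property the paper exploits for stabilizing convergence.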
no code implementations • 8 May 2020 • Jiayi Liu, Samarth Tripathi, Unmesh Kurup, Mohak Shah
With the general trend of increasing Convolutional Neural Network (CNN) model sizes, model compression and acceleration techniques have become critical for the deployment of these models on edge devices.
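As one illustration of the compression techniques referenced here, unstructured magnitude pruning is a common baseline: zero out the smallest-magnitude fraction of weights, then fine-tune. This sketch is generic, not the paper's specific method:

```python
def prune_by_magnitude(weights, sparsity=0.5):
    """Zero out the smallest-magnitude `sparsity` fraction of weights
    (unstructured pruning, a common compression baseline)."""
    k = int(len(weights) * sparsity)
    threshold = sorted(abs(w) for w in weights)[k - 1] if k else 0.0
    return [0.0 if abs(w) <= threshold else w for w in weights]

pruned = prune_by_magnitude([0.5, -0.02, 1.2, 0.03, -0.8, 0.01], sparsity=0.5)
```

In practice the zeroed weights are stored in sparse format (or entire channels are pruned for structured speedups on edge hardware).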
1 code implementation • NeurIPS 2019 • Sauptik Dhar, Vladimir Cherkassky, Mohak Shah
We introduce the notion of learning from contradictions, a.k.a. Universum learning, for multiclass problems and propose a novel formulation for multiclass universum SVM (MU-SVM).
1 code implementation • 6 Nov 2019 • Jiayi Liu, Samarth Tripathi, Unmesh Kurup, Mohak Shah
Tuning machine learning models at scale, especially finding the right hyperparameter values, can be difficult and time-consuming.
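At its simplest, the tuning loop amounts to sampling configurations and keeping the best; the random-search sketch below illustrates this (the hyperparameter names `lr` and `reg` and the toy objective are hypothetical, not from the paper):

```python
import random

def random_search(objective, space, n_trials=50, seed=0):
    """Sample hyperparameters uniformly from `space` and keep the
    configuration with the lowest objective value."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("inf")
    for _ in range(n_trials):
        cfg = {name: rng.uniform(lo, hi) for name, (lo, hi) in space.items()}
        score = objective(cfg)
        if score < best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Hypothetical objective with its minimum at lr=0.1, reg=0.01.
obj = lambda c: (c["lr"] - 0.1) ** 2 + (c["reg"] - 0.01) ** 2
cfg, score = random_search(obj, {"lr": (0.0, 1.0), "reg": (0.0, 0.1)})
```

Scaling this up is where the difficulty the abstract mentions comes in: each trial is an expensive training run, so trials are parallelized and poor configurations are terminated early.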
no code implementations • 2 Nov 2019 • Sauptik Dhar, Junyao Guo, Jiayi Liu, Samarth Tripathi, Unmesh Kurup, Mohak Shah
However, on-device learning is an expansive field with connections to a large number of related topics in AI and machine learning (including online learning, model adaptation, one/few-shot learning, etc.).
no code implementations • 15 Oct 2019 • Youngsuk Park, Sauptik Dhar, Stephen Boyd, Mohak Shah
We analyze the theoretical convergence of VM-PG under this metric selection.
no code implementations • 14 May 2019 • Samarth Tripathi, Jiayi Liu, Unmesh Kurup, Mohak Shah, Sauptik Dhar
In this paper, we explore techniques centered around periodic sampling of model weights that provide convergence improvements on gradient update methods (vanilla SGD, Momentum, Adam) for a variety of vision problems (classification, detection, segmentation).
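Periodic sampling of weights can be sketched in the spirit of weight averaging: run SGD as usual, snapshot the weights every few steps, and average the snapshots. This is an illustrative toy on a 1-D noisy quadratic, not the paper's exact procedure:

```python
import random

def sgd_with_periodic_averaging(grad, w, lr=0.1, steps=100, period=10):
    """Plain SGD that snapshots the weights every `period` steps and
    returns the average of the snapshots alongside the final iterate."""
    snapshots = []
    for t in range(1, steps + 1):
        w = w - lr * grad(w)
        if t % period == 0:
            snapshots.append(w)
    w_avg = sum(snapshots) / len(snapshots)
    return w, w_avg

# Toy loss (w - 2)**2 with a noisy gradient oracle (seeded for repeatability).
rng = random.Random(0)
grad = lambda w: 2 * (w - 2.0) + rng.gauss(0, 0.5)
w_final, w_avg = sgd_with_periodic_averaging(grad, w=0.0)
```

Averaging over periodic samples damps the gradient noise in the final iterate, which is the intuition behind the convergence improvements the abstract reports.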
no code implementations • 27 Nov 2018 • Junyao Guo, Unmesh Kurup, Mohak Shah
Furthermore, by discussing which driving scenarios are not covered by existing public datasets and which driveability factors need further investigation and data acquisition, this paper aims to encourage both targeted dataset collection and the proposal of novel driveability metrics that improve the robustness of autonomous cars in adverse environments.
1 code implementation • 23 Aug 2018 • Sauptik Dhar, Vladimir Cherkassky, Mohak Shah
We introduce Universum learning for multiclass problems and propose a novel formulation for multiclass universum SVM (MU-SVM).
no code implementations • 2 Jul 2018 • Jiayi Liu, Samarth Tripathi, Unmesh Kurup, Mohak Shah
We perform a variety of analyses on the MNIST dataset and validate the approach with a number of DNN models pre-trained on the ImageNet dataset.
no code implementations • 25 Jan 2018 • Jayanta K. Dutta, Jiayi Liu, Unmesh Kurup, Mohak Shah
We apply this technique to generate models for multiple image datasets and show that these models achieve performance comparable to the state of the art (even surpassing it in one case).
no code implementations • 25 Jul 2017 • Shujian Yu, Zubin Abraham, Heng Wang, Mohak Shah, Yantao Wei, José C. Príncipe
A fundamental issue for statistical classification models in a streaming environment is that the joint distribution between predictor and response variables changes over time (a phenomenon known as concept drift), causing classification performance to deteriorate dramatically.
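A sliding-window error-rate monitor is perhaps the simplest way to illustrate drift detection: raise an alarm when the recent error rate exceeds the long-run baseline by a margin. The window size and threshold below are arbitrary, and this is not the detector proposed in the paper:

```python
from collections import deque

def drift_monitor(errors, window=50, threshold=0.15):
    """Flag concept drift at steps where the error rate over the last
    `window` predictions exceeds the running baseline by `threshold`."""
    recent = deque(maxlen=window)
    baseline_sum = baseline_n = 0
    alarms = []
    for i, err in enumerate(errors):
        recent.append(err)
        baseline_sum += err
        baseline_n += 1
        baseline = baseline_sum / baseline_n
        if len(recent) == window and sum(recent) / window > baseline + threshold:
            alarms.append(i)
    return alarms

# Simulated stream: accurate for 200 steps, then degraded (drift at step 200).
stream = [0] * 200 + [1] * 100
alarms = drift_monitor(stream)
```

A detector like this only reacts after errors accumulate; the detection delay versus false-alarm rate trade-off is controlled by the window and threshold.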
no code implementations • 5 Dec 2016 • Shengdong Zhang, Soheil Bahrampour, Naveen Ramakrishnan, Mohak Shah
In this paper, we consider the problem of event classification with multi-variate time series data consisting of heterogeneous (continuous and categorical) variables.
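Handling heterogeneous variables typically starts with encoding: continuous channels pass through while categorical channels are one-hot encoded before the window is fed to a classifier. The feature layout below is illustrative, not the paper's model input:

```python
def encode_event_window(window, categories):
    """Flatten one multivariate time-series window into a feature vector:
    continuous values pass through; categorical values are one-hot encoded."""
    features = []
    for cont, cat in window:  # each time step: (continuous, categorical)
        features.append(cont)
        features.extend(1.0 if cat == c else 0.0 for c in categories)
    return features

# Hypothetical 2-step window with one continuous and one categorical channel.
vec = encode_event_window([(0.7, "on"), (0.2, "off")], categories=["on", "off"])
```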
no code implementations • 29 Sep 2016 • Sauptik Dhar, Naveen Ramakrishnan, Vladimir Cherkassky, Mohak Shah
We introduce Universum learning for multiclass problems and propose a novel formulation for multiclass universum SVM (MU-SVM).
no code implementations • 19 Nov 2015 • Soheil Bahrampour, Naveen Ramakrishnan, Lukas Schott, Mohak Shah
The study is performed on several types of deep learning architectures, and we evaluate the performance of the above frameworks when employed on a single machine in both (multi-threaded) CPU and GPU (Nvidia Titan X) settings.
no code implementations • NeurIPS 2008 • Mohak Shah
By extending the recently proposed Occam's Hammer principle to the data-dependent settings, we derive point-wise versions of the bounds on the stochastic sample compressed classifiers and also recover the corresponding classical PAC-Bayes bound.