no code implementations • 16 Jan 2025 • Pratyush Dhingra, Janardhan Rao Doppa, Partha Pratim Pande
Experimental results demonstrate that Atleus outperforms the existing state-of-the-art by up to 56x and 64.5x in terms of performance and energy efficiency, respectively.
1 code implementation • 25 Dec 2024 • Yassine Chemingui, Aryan Deshwal, Honghao Wei, Alan Fern, Janardhan Rao Doppa
Offline safe reinforcement learning (OSRL) involves learning a decision-making policy from a fixed batch of training data to maximize rewards while satisfying pre-defined safety constraints.
1 code implementation • 11 Dec 2024 • Syrine Belakaria, Alaleh Ahmadianshalchi, Barbara Engelhardt, Stefano Ermon, Janardhan Rao Doppa
We address this challenge by using hypervolume improvement (HVI) as our scalarization approach, which allows us to use a lower bound on the Bellman equation to approximate the finite-horizon lookahead with a batch expected hypervolume improvement (EHVI) acquisition function (AF) for MOO.
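As a rough illustration of the hypervolume-improvement scalarization mentioned above, here is a minimal 2-D sketch (maximization with a fixed reference point; the Pareto front, reference point, and candidate values are made up for illustration and this is not the paper's implementation):

```python
import numpy as np

def hypervolume_2d(points, ref):
    """Hypervolume dominated by `points` (maximization) relative to `ref` in 2-D."""
    pts = np.array([p for p in points if (p > ref).all()])
    if len(pts) == 0:
        return 0.0
    # Sort by the first objective (descending) and sweep over the second.
    pts = pts[np.argsort(-pts[:, 0])]
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y > prev_y:
            hv += (x - ref[0]) * (y - prev_y)
            prev_y = y
    return hv

def hvi(candidate, pareto_front, ref):
    """Hypervolume improvement from adding `candidate` to the current front."""
    return (hypervolume_2d(np.vstack([pareto_front, candidate]), ref)
            - hypervolume_2d(pareto_front, ref))

front = np.array([[1.0, 3.0], [2.0, 2.0], [3.0, 1.0]])
print(hvi(np.array([2.5, 2.5]), front, ref=np.array([0.0, 0.0])))  # 1.25
```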
no code implementations • 6 Aug 2024 • Pratyush Dhingra, Janardhan Rao Doppa, Partha Pratim Pande
Transformers have revolutionized deep learning and generative modeling to enable unprecedented advancements in natural language processing tasks and beyond.
1 code implementation • 13 Jul 2024 • Syrine Belakaria, Benjamin Letham, Janardhan Rao Doppa, Barbara Engelhardt, Stefano Ermon, Eytan Bakshy
We consider the problem of active learning for global sensitivity analysis of expensive black-box functions.
1 code implementation • 13 Jun 2024 • Alaleh Ahmadianshalchi, Syrine Belakaria, Janardhan Rao Doppa
We solve a cheap MOO problem by assigning the selected acquisition function for each expensive objective function to obtain a candidate set of inputs for evaluation.
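To make the idea above concrete, here is a small sketch that scores random candidate inputs with a per-objective expected-improvement acquisition function under independent GP surrogates and keeps the non-dominated candidates; the surrogate model, acquisition function, and candidate sampling are illustrative assumptions rather than the paper's exact setup:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expected_improvement(gp, X, best):
    """EI for minimization under a GP surrogate."""
    mu, sigma = gp.predict(X, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def non_dominated(values):
    """Indices of rows not dominated by any other row (maximization)."""
    keep = []
    for i, v in enumerate(values):
        dominated = any((values[j] >= v).all() and (values[j] > v).any()
                        for j in range(len(values)) if j != i)
        if not dominated:
            keep.append(i)
    return keep

def propose_candidates(gps, y_observed, candidates):
    """Cheap MOO over the per-objective acquisition values of random candidates."""
    acq = np.column_stack([expected_improvement(gp, candidates, y.min())
                           for gp, y in zip(gps, y_observed.T)])
    return candidates[non_dominated(acq)]

# Toy usage: two expensive objectives over a 2-D input space.
rng = np.random.default_rng(0)
X = rng.uniform(size=(20, 2))
Y = np.column_stack([np.sum((X - 0.2) ** 2, axis=1), np.sum((X - 0.8) ** 2, axis=1)])
gps = [GaussianProcessRegressor(normalize_y=True).fit(X, Y[:, m]) for m in range(2)]
print(propose_candidates(gps, Y, rng.uniform(size=(200, 2))))
```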
1 code implementation • 10 Jun 2024 • Yuanjie Shi, Subhankar Ghosh, Taha Belkhouja, Janardhan Rao Doppa, Yan Yan
In contrast to the standard class-conditional CP (CCP) method that uniformly thresholds the class-wise conformity score for each class, the augmented label rank calibration step allows RC3P to selectively iterate this class-wise thresholding subroutine only for a subset of classes whose class-wise top-k error is small.
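A simplified sketch of class-conditional conformal calibration with a top-k label-rank filter, in the spirit of the description above; for brevity the rank filter is applied to every class, whereas the method described above applies it selectively per class, and the score function, k, and variable names are illustrative assumptions:

```python
import numpy as np

def ccp_with_rank_filter(cal_probs, cal_labels, test_probs, alpha=0.1, k=3):
    """Class-conditional conformal prediction sets with a top-k label-rank filter.

    cal_probs:  (n_cal, C) softmax scores on a held-out calibration set
    cal_labels: (n_cal,)   true labels of the calibration set
    test_probs: (n_test, C) softmax scores on test inputs
    """
    n_cal, C = cal_probs.shape
    thresholds = np.ones(C)
    for y in range(C):
        idx = np.where(cal_labels == y)[0]
        if len(idx) == 0:
            continue
        scores = 1.0 - cal_probs[idx, y]            # nonconformity: 1 - softmax prob
        q = min(1.0, np.ceil((len(idx) + 1) * (1 - alpha)) / len(idx))
        thresholds[y] = np.quantile(scores, q, method="higher")
    ranks = np.argsort(np.argsort(-test_probs, axis=1), axis=1)  # 0 = most likely class
    pred_sets = []
    for i in range(len(test_probs)):
        s = 1.0 - test_probs[i]
        # Keep class y only if it passes its class-wise threshold AND is ranked top-k.
        pred_sets.append([y for y in range(C) if s[y] <= thresholds[y] and ranks[i, y] < k])
    return pred_sets
```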
no code implementations • 31 May 2024 • Mohammed Amine Gharsallaoui, Bhupinderjeet Singh, Supriya Savalkar, Aryan Deshwal, Yan Yan, Ananth Kalyanaraman, Kirti Rajagopalan, Janardhan Rao Doppa
Predicting the spatiotemporal variation in streamflow along with uncertainty quantification enables decision-making for sustainable management of scarce water resources.
1 code implementation • 8 May 2024 • Yassine Chemingui, Aryan Deshwal, Trong Nghia Hoang, Janardhan Rao Doppa
Offline optimization is an emerging problem in many experimental engineering domains including protein, drug or aircraft design, where online experimentation to collect evaluation data is too expensive or dangerous.
no code implementations • 28 Mar 2024 • Harsh Sharma, Gaurav Narang, Janardhan Rao Doppa, Umit Ogras, Partha Pratim Pande
However, as the complexity of deep convolutional neural networks (DNNs) grows, we need to design a manycore architecture with multiple ReRAM-based processing elements (PEs) on a single chip.
no code implementations • 19 Jan 2024 • Pratyush Dhingra, Chukwufumnanya Ogbogu, Biresh Kumar Joardar, Janardhan Rao Doppa, Ananth Kalyanaraman, Partha Pratim Pande
Experimental results demonstrate that the FARe framework can restore GNN test accuracy by 47.6% on faulty ReRAM hardware with a ~1% timing overhead compared to the fault-free counterpart.
no code implementations • 23 Mar 2023 • Alaleh Ahmadianshalchi, Syrine Belakaria, Janardhan Rao Doppa
Our overall goal is to approximate the optimal Pareto set over the small fraction of feasible input designs.
1 code implementation • 19 Mar 2023 • Subhankar Ghosh, Taha Belkhouja, Yan Yan, Janardhan Rao Doppa
Safe deployment of deep neural networks in high-stakes real-world applications requires theoretically sound uncertainty quantification.
1 code implementation • 3 Mar 2023 • Aryan Deshwal, Sebastian Ament, Maximilian Balandat, Eytan Bakshy, Janardhan Rao Doppa, David Eriksson
We use Bayesian Optimization (BO) and propose a novel surrogate modeling approach for efficiently handling a large number of binary and categorical parameters.
no code implementations • 19 Jul 2022 • Harsha Kokel, Mayukh Das, Rakibul Islam, Julia Bonn, Jon Cai, Soham Dan, Anjali Narayan-Chen, Prashant Jayannavar, Janardhan Rao Doppa, Julia Hockenmaier, Sriraam Natarajan, Martha Palmer, Dan Roth
We consider the problem of human-machine collaborative problem solving as a planning task coupled with natural language communication.
no code implementations • 10 Jul 2022 • Garrett Wilson, Janardhan Rao Doppa, Diane J. Cook
Because of the large variations present in human behavior, we collect data from many participants across two different age groups.
1 code implementation • 9 Jul 2022 • Taha Belkhouja, Yan Yan, Janardhan Rao Doppa
Despite the success of deep neural networks (DNNs) for real-world applications over time-series data such as mobile health, little is known about how to train robust DNNs for the time-series domain due to its unique characteristics compared to image and text data.
1 code implementation • 9 Jul 2022 • Taha Belkhouja, Yan Yan, Janardhan Rao Doppa
Experiments on diverse real-world benchmarks demonstrate that the SRS method is well-suited for time-series OOD detection when compared to baseline methods.
1 code implementation • 9 Jul 2022 • Taha Belkhouja, Janardhan Rao Doppa
We also provide certified bounds on the norm of the statistical features for constructing adversarial examples.
1 code implementation • 9 Jul 2022 • Taha Belkhouja, Yan Yan, Janardhan Rao Doppa
Despite the rapid progress on research in adversarial robustness of deep neural networks (DNNs), there is little principled work for the time-series domain.
no code implementations • 25 Jun 2022 • Syrine Belakaria, Janardhan Rao Doppa, Nicolo Fusi, Rishit Sheth
The growing size of deep neural networks (DNNs) and datasets motivates the need for efficient solutions for simultaneous model selection and training.
1 code implementation • 12 Apr 2022 • Syrine Belakaria, Aryan Deshwal, Nitthilan Kannappan Jayakodi, Janardhan Rao Doppa
We consider the problem of multi-objective (MO) blackbox optimization using expensive function evaluations, where the goal is to approximate the true Pareto set of solutions while minimizing the number of function evaluations.
no code implementations • 16 Jan 2022 • Dina Hussein, Ganapati Bhat, Janardhan Rao Doppa
To solve this problem, we propose a principled algorithm referred to as AdaEM.
1 code implementation • 2 Dec 2021 • Aryan Deshwal, Syrine Belakaria, Janardhan Rao Doppa, Dae Hyun Kim
First, BOPS-T employs a Gaussian process (GP) surrogate model with Kendall kernels and a tractable acquisition function optimization approach based on Thompson sampling to select the sequence of permutations for evaluation.
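For reference, the Kendall kernel between two permutations is the normalized count of concordant minus discordant pairs (equal to Kendall's tau when there are no ties); a minimal sketch of just the kernel, with the GP fitting and Thompson-sampling loop of BOPS-T omitted:

```python
from itertools import combinations

def kendall_kernel(sigma, pi):
    """Kendall kernel between permutations: (concordant - discordant) / total pairs."""
    n = len(sigma)
    signed = sum(
        1 if (sigma[i] - sigma[j]) * (pi[i] - pi[j]) > 0 else -1
        for i, j in combinations(range(n), 2)
    )
    return signed / (n * (n - 1) / 2)

print(kendall_kernel([0, 1, 2, 3], [0, 1, 3, 2]))  # 0.666...
print(kendall_kernel([0, 1, 2, 3], [3, 2, 1, 0]))  # -1.0
```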
1 code implementation • NeurIPS 2021 • Aryan Deshwal, Janardhan Rao Doppa
The key idea is to define a novel structure-coupled kernel that explicitly integrates the structural information from decoded structures with the learned latent space representation for better surrogate modeling.
4 code implementations • 13 Oct 2021 • Syrine Belakaria, Aryan Deshwal, Janardhan Rao Doppa
We consider the problem of black-box multi-objective optimization (MOO) using expensive function evaluations (also referred to as experiments), where the goal is to approximate the true Pareto set of solutions by minimizing the total resource cost of experiments.
1 code implementation • 30 Sep 2021 • Garrett Wilson, Janardhan Rao Doppa, Diane J. Cook
CALDA synergistically combines the principles of contrastive learning and adversarial learning to robustly support multi-source UDA (MS-UDA) for time series data.
1 code implementation • 8 Jun 2021 • Aryan Deshwal, Syrine Belakaria, Janardhan Rao Doppa
We develop a principled approach for constructing diffusion kernels over hybrid spaces by utilizing the additive kernel formulation, which allows additive interactions of all orders in a tractable manner.
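A toy sketch of a hybrid-space kernel in this spirit: a closed-form diffusion kernel on a complete graph for each categorical dimension, an RBF kernel for the continuous dimensions, and a first-order additive-plus-product combination (a simplified stand-in for the all-order additive construction described above; the kernel hyperparameters are assumptions):

```python
import numpy as np

def categorical_diffusion_kernel(c1, c2, beta=0.5, n_values=5):
    """Product over categorical dimensions of the diffusion kernel on the complete
    graph K_{n_values} (closed-form entries for same/different category values)."""
    k = 1.0
    for a, b in zip(c1, c2):
        same = (1 + (n_values - 1) * np.exp(-n_values * beta)) / n_values
        diff = (1 - np.exp(-n_values * beta)) / n_values
        k *= same if a == b else diff
    return k

def rbf_kernel(x1, x2, lengthscale=1.0):
    return np.exp(-np.sum((np.asarray(x1) - np.asarray(x2)) ** 2) / (2 * lengthscale ** 2))

def hybrid_kernel(z1, z2):
    """Additive combination of the discrete and continuous kernels, plus their
    product as a first-order stand-in for higher-order interactions."""
    (c1, x1), (c2, x2) = z1, z2
    kd = categorical_diffusion_kernel(c1, c2)
    kc = rbf_kernel(x1, x2)
    return kd + kc + kd * kc

print(hybrid_kernel(((0, 2), [0.1, 0.4]), ((0, 3), [0.2, 0.4])))
```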
no code implementations • 23 Mar 2021 • Nitthilan Kannappan Jayakodi, Janardhan Rao Doppa, Partha Pratim Pande
The key idea behind SETGAN for an image generation task is that, for a given input image, we train a GAN on a remote server and use the trained model on edge devices.
1 code implementation • 14 Dec 2020 • Aryan Deshwal, Syrine Belakaria, Janardhan Rao Doppa
In this paper, we propose an efficient approach referred to as Mercer Features for Combinatorial Bayesian Optimization (MerCBO).
no code implementations • 14 Dec 2020 • Aryan Deshwal, Syrine Belakaria, Janardhan Rao Doppa, Alan Fern
We consider the problem of optimizing expensive black-box functions over discrete spaces (e.g., sets, sequences, graphs).
no code implementations • 2 Nov 2020 • Syrine Belakaria, Aryan Deshwal, Janardhan Rao Doppa
The overall goal is to approximate the true Pareto set of solutions by minimizing the resources consumed for function evaluations.
no code implementations • 12 Sep 2020 • Syrine Belakaria, Aryan Deshwal, Janardhan Rao Doppa
The key idea is to select the sequence of input and function approximations for multiple objectives which maximize the information gain per unit cost for the optimal Pareto front.
1 code implementation • 1 Sep 2020 • Syrine Belakaria, Aryan Deshwal, Janardhan Rao Doppa
We consider the problem of constrained multi-objective blackbox optimization using expensive function evaluations, where the goal is to approximate the true Pareto set of solutions satisfying a set of constraints while minimizing the number of function evaluations.
no code implementations • 22 Aug 2020 • Sumit K. Mandal, Umit Y. Ogras, Janardhan Rao Doppa, Raid Z. Ayoub, Michael Kishinevsky, Partha P. Pande
Dynamic resource management has become one of the major areas of research in modern computer and communication system design due to the demand for lower power consumption and higher performance.
1 code implementation • 18 Aug 2020 • Aryan Deshwal, Syrine Belakaria, Janardhan Rao Doppa
Based on recent advances in submodular relaxation (Ito and Fujimaki, 2016) for solving Binary Quadratic Programs, we study an approach referred to as Parametrized Submodular Relaxation (PSR) towards the goal of improving the scalability and accuracy of solving AFO problems for the BOCS model.
1 code implementation • 16 Aug 2020 • Syrine Belakaria, Aryan Deshwal, Janardhan Rao Doppa
We consider the problem of constrained multi-objective (MO) blackbox optimization using expensive function evaluations, where the goal is to approximate the true Pareto set of solutions satisfying a set of constraints while minimizing the number of function evaluations.
3 code implementations • 22 May 2020 • Garrett Wilson, Janardhan Rao Doppa, Diane J. Cook
First, we propose a novel Convolutional deep Domain Adaptation model for Time Series data (CoDATS) that significantly improves accuracy and training time over state-of-the-art DA strategies on real-world sensor data benchmarks.
no code implementations • 20 Mar 2020 • Sumit K. Mandal, Ganapati Bhat, Janardhan Rao Doppa, Partha Pratim Pande, Umit Y. Ogras
To address this need, system-on-chips (SoC) that are at the heart of these devices provide a variety of control knobs, such as the number of active cores and their voltage/frequency levels.
no code implementations • 15 Dec 2019 • Mayukh Das, Nandini Ramanan, Janardhan Rao Doppa, Sriraam Natarajan
First, we define a distance measure between candidate concept representations that improves the efficiency of the search for the target concept and generalization.
1 code implementation • NeurIPS 2019 • Syrine Belakaria, Aryan Deshwal, Janardhan Rao Doppa
We consider the problem of multi-objective (MO) blackbox optimization using expensive function evaluations, where the goal is to approximate the true Pareto-set of solutions by minimizing the number of function evaluations.
no code implementations • 29 Jan 2019 • Nitthilan Kannappan Jayakodi, Anwesha Chatterjee, Wonje Choi, Janardhan Rao Doppa, Partha Pratim Pande
Deep neural networks have seen tremendous success for different modalities of data including images, videos, and speech.
2 code implementations • 23 Jan 2019 • Shubhomoy Das, Md. Rakibul Islam, Nitthilan Kannappan Jayakodi, Janardhan Rao Doppa
Our results show that active learning allows us to discover significantly more anomalies than state-of-the-art unsupervised baselines, our batch active learning algorithm discovers diverse anomalies, and our algorithms under the streaming-data setup are competitive with the batch setup.
1 code implementation • 20 Oct 2018 • Biresh Kumar Joardar, Ryan Gary Kim, Janardhan Rao Doppa, Partha Pratim Pande, Diana Marculescu, Radu Marculescu
Our results show that these generalized 3D NoCs only incur a 1.8% (36-tile system) and 1.1% (64-tile system) average performance loss compared to application-specific NoCs.
2 code implementations • 2 Oct 2018 • Md. Rakibul Islam, Shubhomoy Das, Janardhan Rao Doppa, Sriraam Natarajan
Human analysts who use anomaly detection systems in practice want to retain the use of simple and explainable global anomaly detectors.
2 code implementations • 17 Sep 2018 • Shubhomoy Das, Md. Rakibul Islam, Nitthilan Kannappan Jayakodi, Janardhan Rao Doppa
First, we present an important insight into how anomaly detector ensembles are naturally suited for active learning.
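A rough sketch of the ensemble-plus-feedback loop this line of work builds on: score instances with an ensemble, query the top-scoring unlabeled instance, and re-weight ensemble members based on the analyst's label. The `oracle` callback stands in for the human analyst, and the multiplicative weight update is a heuristic assumed for illustration, not the paper's optimization-based update:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def active_anomaly_loop(X, oracle, n_trees=50, budget=20, lr=0.5):
    """Query-by-ensemble active anomaly detection with multiplicative re-weighting."""
    forest = IsolationForest(n_estimators=n_trees, random_state=0).fit(X)
    # Per-tree anomaly scores (higher = more anomalous): negative path-length proxy.
    member_scores = np.column_stack([
        -tree.decision_path(X).sum(axis=1).A1 for tree in forest.estimators_
    ]).astype(float)
    member_scores = (member_scores - member_scores.mean(0)) / (member_scores.std(0) + 1e-9)
    w = np.ones(n_trees) / n_trees
    labeled = set()
    for _ in range(budget):
        combined = member_scores @ w
        combined[list(labeled)] = -np.inf          # do not re-query labeled instances
        i = int(np.argmax(combined))               # most anomalous unlabeled instance
        labeled.add(i)
        is_anomaly = oracle(i)                     # analyst feedback (stand-in callback)
        # Up-weight members that agreed with the analyst, down-weight the rest.
        agree = member_scores[i] > 0
        w *= np.where(agree == is_anomaly, 1 + lr, 1 - lr)
        w /= w.sum()
    return labeled, w
```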
no code implementations • 11 Sep 2018 • J. Walker Orr, Prasad Tadepalli, Janardhan Rao Doppa, Xiaoli Fern, Thomas G. Dietterich
Scripts have been proposed to model the stereotypical event sequences found in narratives.
no code implementations • 30 Nov 2017 • Ryan Gary Kim, Janardhan Rao Doppa, Partha Pratim Pande, Diana Marculescu, Radu Marculescu
Tight collaboration between experts of machine learning and manycore system design is necessary to create a data-driven manycore design framework that integrates both learning and expert knowledge.
no code implementations • WS 2017 • Anjali Narayan-Chen, Colin Graber, Mayukh Das, Md. Rakibul Islam, Soham Dan, Sriraam Natarajan, Janardhan Rao Doppa, Julia Hockenmaier, Martha Palmer, Dan Roth
Agents that communicate back and forth with humans to help them execute non-linguistic tasks are a long sought goal of AI.
no code implementations • CVPR 2015 • Michael Lam, Janardhan Rao Doppa, Sinisa Todorovic, Thomas G. Dietterich
The mainstream approach to structured prediction problems in computer vision is to learn an energy function such that the solution minimizes that function.