no code implementations • 8 Jan 2025 • Srikar Yellapragada, Kowshik Thopalli, Vivek Narayanaswamy, Wesam Sakla, Yang Liu, Yamen Mubarka, Dimitris Samaras, Jayaraman J. Thiagarajan
To that end, we propose a simple method that combines the special CLS token embedding commonly employed in ViTs with the average-pooled register embeddings to create feature representations which are subsequently used for training a downstream classifier.
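The combination step described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: concatenating the CLS embedding with the average-pooled register embeddings is one plausible reading of "combines", and the function name and shapes are assumptions.

```python
import numpy as np

def make_features(cls_token, registers):
    """Fuse ViT outputs into a single feature for a downstream classifier.

    cls_token: (batch, dim) CLS token embedding.
    registers: (batch, num_registers, dim) register token embeddings.
    """
    reg_pooled = registers.mean(axis=1)                  # average-pool the registers
    return np.concatenate([cls_token, reg_pooled], axis=-1)
```

With `dim = 8` and 3 registers, the fused feature has dimension 16, and a linear classifier can be trained directly on it.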
no code implementations • 6 Sep 2024 • Banooqa Banday, Kowshik Thopalli, Tanzima Z. Islam, Jayaraman J. Thiagarajan
LLM-based data generation for real-world tabular data can be challenged by the lack of sufficient semantic context in feature names used to describe columns.
1 code implementation • 1 Aug 2024 • Rakshith Subramanyam, Kowshik Thopalli, Vivek Narayanaswamy, Jayaraman J. Thiagarajan
Reliably detecting when a deployed machine learning model is likely to fail on a given input is crucial for ensuring safe operation.
no code implementations • 29 Jun 2024 • Hongjun Choi, Jayaraman J. Thiagarajan, Ruben Glatt, Shusen Liu
In this work, we investigate the fundamental trade-off regarding accuracy and parameter efficiency in the parameterization of neural network weights using predictor networks.
no code implementations • 1 Jun 2024 • Vivek Narayanaswamy, Kowshik Thopalli, Rushil Anirudh, Yamen Mubarka, Wesam Sakla, Jayaraman J. Thiagarajan
Anchoring is a recent, architecture-agnostic principle for training deep neural networks that has been shown to significantly improve uncertainty estimation, calibration, and extrapolation capabilities.
no code implementations • 12 Apr 2024 • Joshua Feinglass, Jayaraman J. Thiagarajan, Rushil Anirudh, T. S. Jayram, Yezhou Yang
Current approaches in Generalized Zero-Shot Learning (GZSL) are built upon base models which consider only a single class attribute vector representation over the entire image.
no code implementations • 7 Jan 2024 • Puja Trivedi, Mark Heimann, Rushil Anirudh, Danai Koutra, Jayaraman J. Thiagarajan
While graph neural networks (GNNs) are widely used for node and graph representation learning tasks, the reliability of GNN uncertainty estimates under distribution shifts remains relatively under-explored.
no code implementations • 6 Dec 2023 • Matthew L. Olson, Shusen Liu, Jayaraman J. Thiagarajan, Bogdan Kustowski, Weng-Keen Wong, Rushil Anirudh
Recent advances in machine learning, specifically transformer architecture, have led to significant advancements in commercial domains.
no code implementations • 20 Sep 2023 • Puja Trivedi, Mark Heimann, Rushil Anirudh, Danai Koutra, Jayaraman J. Thiagarajan
Safe deployment of graph neural networks (GNNs) under distribution shift requires models to provide accurate confidence indicators (CI).
no code implementations • 20 Sep 2023 • Jayaraman J. Thiagarajan, Vivek Narayanaswamy, Puja Trivedi, Rushil Anirudh
Safe deployment of AI models requires proactive detection of failures to prevent costly errors.
1 code implementation • 10 Jul 2023 • Rakshith Subramanyam, T. S. Jayram, Rushil Anirudh, Jayaraman J. Thiagarajan
In this paper, we explore the potential of Vision-Language Models (VLMs), specifically CLIP, in predicting visual object relationships, which involves interpreting visual features from images into language-based relations.
1 code implementation • 22 May 2023 • Kowshik Thopalli, Rakshith Subramanyam, Pavan Turaga, Jayaraman J. Thiagarajan
We argue that augmentations utilized by existing methods are insufficient to handle large distribution shifts, and hence propose a new approach SiSTA, which first fine-tunes a generative model from the source domain using a single-shot target, and then employs novel sampling strategies for curating synthetic target data.
no code implementations • 23 Mar 2023 • Puja Trivedi, Danai Koutra, Jayaraman J. Thiagarajan
Overall, our work carefully studies the effectiveness of popular scoring functions in realistic settings and helps to better understand their limitations.
no code implementations • 23 Mar 2023 • Puja Trivedi, Danai Koutra, Jayaraman J. Thiagarajan
Advances in the expressivity of pretrained models have increased interest in the design of adaptation protocols which enable safe and effective transfer learning.
1 code implementation • CVPR 2023 • Matthew L. Olson, Shusen Liu, Rushil Anirudh, Jayaraman J. Thiagarajan, Peer-Timo Bremer, Weng-Keen Wong
To this end, we introduce Cross-GAN Auditing (xGA) that, given an established "reference" GAN and a newly proposed "client" GAN, jointly identifies intelligible attributes that are either common across both GANs, novel to the client GAN, or missing from the client GAN.
no code implementations • ICCV 2023 • Jiaming Liu, Rushil Anirudh, Jayaraman J. Thiagarajan, Stewart He, K. Aditya Mohan, Ulugbek S. Kamilov, Hyojin Kim
Limited-Angle Computed Tomography (LACT) is a non-destructive evaluation technique used in a variety of applications ranging from security to medicine.
no code implementations • 30 Oct 2022 • Yuzhe Lu, Shusen Liu, Jayaraman J. Thiagarajan, Wesam Sakla, Rushil Anirudh
We present a fully automated framework for building object detectors on satellite imagery without requiring any human annotation or intervention.
1 code implementation • 29 Oct 2022 • Rakshith Subramanyam, Kowshik Thopalli, Spring Berman, Pavan Turaga, Jayaraman J. Thiagarajan
The problem of adapting models from a source domain using data from any target domain of interest has gained prominence, owing to the brittle generalization of deep neural networks.
1 code implementation • 4 Aug 2022 • Puja Trivedi, Ekdeep Singh Lubana, Mark Heimann, Danai Koutra, Jayaraman J. Thiagarajan
Overall, our work rigorously contextualizes, both empirically and theoretically, the effects of data-centric properties on augmentation strategies and learning paradigms for graph SSL.
no code implementations • 26 Jul 2022 • Puja Trivedi, Danai Koutra, Jayaraman J. Thiagarajan
While directly fine-tuning (FT) large-scale, pretrained models on task-specific data is well-known to induce strong in-distribution task performance, recent works have demonstrated that different adaptation protocols, such as linear probing (LP) prior to FT, can improve out-of-distribution generalization.
no code implementations • 25 Jul 2022 • Rakshith Subramanyam, Mark Heimann, Jayram Thathachar, Rushil Anirudh, Jayaraman J. Thiagarajan
Model agnostic meta-learning algorithms aim to infer priors from several observed tasks that can then be used to adapt to a new task with few examples.
1 code implementation • 14 Jul 2022 • Jayaraman J. Thiagarajan, Rushil Anirudh, Vivek Narayanaswamy, Peer-Timo Bremer
We are interested in estimating the uncertainties of deep neural networks, which play an important role in many scientific and engineering problems.
no code implementations • 12 Jul 2022 • Vivek Narayanaswamy, Yamen Mubarka, Rushil Anirudh, Deepta Rajan, Andreas Spanias, Jayaraman J. Thiagarajan
We focus on the problem of producing well-calibrated out-of-distribution (OOD) detectors, in order to enable safe deployment of medical image classifiers.
1 code implementation • 9 Jul 2022 • Kowshik Thopalli, Pavan Turaga, Jayaraman J. Thiagarajan
With a minimal overhead of storing the subspace basis set for the source data, CATTAn enables unsupervised alignment between source and target data during adaptation.
3 code implementations • 8 Jul 2022 • Rushil Anirudh, Jayaraman J. Thiagarajan
Our goal in this paper is to exploit heteroscedastic temperature scaling as a calibration strategy for out of distribution (OOD) detection.
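As background for the calibration strategy above, here is plain (homoscedastic) temperature scaling with the standard maximum-softmax-probability OOD score; the paper's heteroscedastic variant, which is not reproduced here, would predict a per-sample temperature rather than use a single global `T`.

```python
import numpy as np

def softmax(logits, T=1.0):
    z = logits / T                                       # temperature-scaled logits
    z = z - z.max(axis=-1, keepdims=True)                # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_ood_score(logits, T=1.0):
    # maximum softmax probability; a lower score flags a likely OOD input
    return softmax(logits, T).max(axis=-1)
```

Raising `T` flattens the softmax, shrinking overconfident scores on all inputs, which is what makes the temperature a useful calibration knob.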
1 code implementation • 15 Jun 2022 • Tejas Gokhale, Rushil Anirudh, Jayaraman J. Thiagarajan, Bhavya Kailkhura, Chitta Baral, Yezhou Yang
To be successful in single source domain generalization, maximizing diversity of synthesized domains has emerged as one of the most effective strategies.
1 code implementation • 17 Dec 2021 • Kowshik Thopalli, Sameeksha Katoch, Pavan Turaga, Jayaraman J. Thiagarajan
In this paper, we focus on the challenging problem of multi-source zero shot DG (MDG), where labeled training data from multiple source domains is available but with no access to data from the target domain.
no code implementations • 24 Nov 2021 • Ankita Shukla, Rushil Anirudh, Eugene Kur, Jayaraman J. Thiagarajan, Peer-Timo Bremer, Brian K. Spears, Tammy Ma, Pavan Turaga
In this paper, we develop a Wasserstein autoencoder (WAE) with a hyperspherical prior for multimodal data in the application of inertial confinement fusion.
no code implementations • 5 Oct 2021 • Rushil Anirudh, Jayaraman J. Thiagarajan
Anchoring works by first transforming the input into a tuple consisting of an anchor point drawn from a prior distribution, and a combination of the input sample with the anchor using a pretext encoding scheme.
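The tuple construction described above can be sketched as follows. Using the residual `x - anchor` as the "combination" is one common pretext encoding; the exact scheme used in the paper is an assumption here, as is drawing anchors from the training data itself as the prior.

```python
import numpy as np

def anchored_tuple(x, anchor):
    # Encode each input as (anchor, residual); the network consumes this tuple
    # instead of the raw input.
    return np.concatenate([anchor, x - anchor], axis=-1)

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 16))
anchors = X[rng.integers(0, len(X), size=len(X))]        # anchors sampled from the data
Z = anchored_tuple(X, anchors)
```

The encoding is lossless: summing the two halves of each tuple recovers the original input, so predictions can be marginalized over many anchor choices at test time to estimate uncertainty.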
no code implementations • 29 Sep 2021 • Harsh Bhatia, Jayaraman J. Thiagarajan, Rushil Anirudh, T.S. Jayram, Tomas Oppelstrup, Helgi I. Ingolfsson, Felice C. Lightstone, Peer-Timo Bremer
Complex scientific inquiries rely increasingly upon large and autonomous multiscale simulation campaigns, which fundamentally require similarity metrics to quantify "sufficient" changes among data and/or configurations.
no code implementations • NeurIPS 2021 • Jayaraman J. Thiagarajan, Vivek Narayanaswamy, Deepta Rajan, Jason Liang, Akshay Chaudhari, Andreas Spanias
Explanation techniques that synthesize small, interpretable changes to a given image while producing desired changes in the model prediction have become popular for introspecting black-box models.
1 code implementation • 29 Sep 2021 • Alexandros Karargyris, Renato Umeton, Micah J. Sheller, Alejandro Aristizabal, Johnu George, Srini Bala, Daniel J. Beutel, Victor Bittorf, Akshay Chaudhari, Alexander Chowdhury, Cody Coleman, Bala Desinghu, Gregory Diamos, Debo Dutta, Diane Feddema, Grigori Fursin, Junyi Guo, Xinyuan Huang, David Kanter, Satyananda Kashyap, Nicholas Lane, Indranil Mallick, Pietro Mascagni, Virendra Mehta, Vivek Natarajan, Nikola Nikolov, Nicolas Padoy, Gennady Pekhimenko, Vijay Janapa Reddi, G Anthony Reina, Pablo Ribalta, Jacob Rosenthal, Abhishek Singh, Jayaraman J. Thiagarajan, Anna Wuest, Maria Xenochristou, Daguang Xu, Poonam Yadav, Michael Rosenthal, Massimo Loda, Jason M. Johnson, Peter Mattson
Medical AI has tremendous potential to advance healthcare by supporting the evidence-based practice of medicine, personalizing patient treatment, reducing costs, and improving provider and patient experience.
no code implementations • 29 Sep 2021 • Puja Trivedi, Mark Heimann, Danai Koutra, Jayaraman J. Thiagarajan
Using the recent population augmentation graph-based analysis of self-supervised learning, we show theoretically that the success of GCL with popular augmentations is bounded by the graph edit distance between different classes.
no code implementations • 19 Apr 2021 • Bogdan Kustowski, Jim A. Gaffney, Brian K. Spears, Gemma J. Anderson, Rushil Anirudh, Peer-Timo Bremer, Jayaraman J. Thiagarajan, Michael K. G. Kruse, Ryan C. Nora
The method described in this paper can be applied to a wide range of problems that require transferring knowledge from simulations to the domain of experiments.
1 code implementation • 14 Apr 2021 • Vivek Sivaraman Narayanaswamy, Jayaraman J. Thiagarajan, Andreas Spanias
Unsupervised deep learning methods for solving audio restoration problems extensively rely on carefully tailored neural architectures that carry strong inductive biases for defining priors in the time or spectral domain.
no code implementations • 5 Mar 2021 • Vivek Narayanaswamy, Jayaraman J. Thiagarajan, Deepta Rajan, Andreas Spanias
With increased interest in adopting AI methods for clinical diagnosis, a vital step towards safe deployment of such tools is to ensure that the models not only produce accurate predictions but also do not generalize to data regimes where the training data provide no meaningful evidence.
no code implementations • 12 Feb 2021 • Nathan Pinnow, Tarek Ramadan, Tanzima Z. Islam, Chase Phelps, Jayaraman J. Thiagarajan
Performance analysis has always been an afterthought during the application development process, focusing on application correctness first.
3 code implementations • 3 Dec 2020 • Tejas Gokhale, Rushil Anirudh, Bhavya Kailkhura, Jayaraman J. Thiagarajan, Chitta Baral, Yezhou Yang
While this deviation may not be exactly known, its broad characterization is specified a priori, in terms of attributes.
no code implementations • NeurIPS 2020 • Bhavya Kailkhura, Jayaraman J. Thiagarajan, Qunwei Li, Jize Zhang, Yi Zhou, Timo Bremer
Using this framework, we show that space-filling sample designs, such as blue noise and Poisson disk sampling, which optimize spectral properties, outperform random designs in terms of the generalization gap and characterize this gain in a closed-form.
no code implementations • 26 Oct 2020 • Gemma J. Anderson, Jim A. Gaffney, Brian K. Spears, Peer-Timo Bremer, Rushil Anirudh, Jayaraman J. Thiagarajan
Large-scale numerical simulations are used across many scientific disciplines to facilitate experimental development and provide insights into underlying physical processes, but they come with a significant computational cost.
no code implementations • 22 Oct 2020 • Vivek Narayanaswamy, Jayaraman J. Thiagarajan, Andreas Spanias
Through the use of carefully tailored convolutional neural network architectures, a deep image prior (DIP) can be used to obtain pre-images from latent representation encodings.
no code implementations • 16 Oct 2020 • Jayaraman J. Thiagarajan, Peer-Timo Bremer, Rushil Anirudh, Timothy C. Germann, Sara Y. Del Valle, Frederick H. Streitz
A crucial aspect of managing a public health crisis is to effectively balance prevention and mitigation strategies, while taking their socio-economic impact into account.
no code implementations • 13 Oct 2020 • Rushil Anirudh, Jayaraman J. Thiagarajan, Peer-Timo Bremer, Timothy C. Germann, Sara Y. Del Valle, Frederick H. Streitz
Here we present a new approach to calibrate an agent-based model -- EpiCast -- using a large set of simulation ensembles for different major metropolitan areas of the United States.
no code implementations • 30 Sep 2020 • Bindya Venkatesh, Jayaraman J. Thiagarajan
Deep predictive models rely on human supervision in the form of labeled training data.
no code implementations • 30 Sep 2020 • Jayaraman J. Thiagarajan, Vivek Narayanaswamy, Rushil Anirudh, Peer-Timo Bremer, Andreas Spanias
With increasing reliance on the outcomes of black-box models in critical applications, post-hoc explainability tools that do not require access to the model internals are often used to help humans understand and trust these models.
no code implementations • 30 Sep 2020 • Uday Shankar Shanthamallu, Jayaraman J. Thiagarajan, Andreas Spanias
In this work, we propose Uncertainty Matching GNN (UM-GNN), which aims to improve the robustness of GNN models, particularly against poisoning attacks on the graph structure, by leveraging epistemic uncertainties from the message passing framework.
1 code implementation • 28 May 2020 • Vivek Narayanaswamy, Jayaraman J. Thiagarajan, Rushil Anirudh, Andreas Spanias
State-of-the-art under-determined audio source separation systems rely on supervised end-to-end training of carefully tailored neural network architectures operating either in the time or the spectral domain.
no code implementations • 5 May 2020 • Jayaraman J. Thiagarajan, Bindya Venkatesh, Rushil Anirudh, Peer-Timo Bremer, Jim Gaffney, Gemma Anderson, Brian Spears
Predictive models that accurately emulate complex scientific processes can achieve exponential speed-ups over numerical simulators or experiments, and at the same time provide surrogates for improving the subsequent analysis.
no code implementations • 3 May 2020 • Deepta Rajan, Jayaraman J. Thiagarajan, Alexandros Karargyris, Satyananda Kashyap
Automated diagnostic assistants in healthcare necessitate accurate AI models that can be trained with limited labeled data, can cope with severe class imbalances and can support simultaneous prediction of multiple disease conditions.
no code implementations • 27 Apr 2020 • Jayaraman J. Thiagarajan, Prasanna Sattigeri, Deepta Rajan, Bindya Venkatesh
The widespread adoption of representation learning technologies in clinical decision making strongly emphasizes the need for characterizing model reliability and enabling rigorous introspection of model behavior.
no code implementations • 10 Feb 2020 • Bindya Venkatesh, Jayaraman J. Thiagarajan, Kowshik Thopalli, Prasanna Sattigeri
The hypothesis that sub-network initializations (lottery tickets) exist within the initializations of over-parameterized networks, which, when trained in isolation, produce highly generalizable models, has led to crucial insights into network initialization and has enabled efficient inference.
1 code implementation • 17 Dec 2019 • Rushil Anirudh, Jayaraman J. Thiagarajan, Peer-Timo Bremer, Brian K. Spears
Neural networks have become very popular in surrogate modeling because of their ability to characterize arbitrary, high dimensional functions in a data driven fashion.
no code implementations • 16 Dec 2019 • Rushil Anirudh, Jayaraman J. Thiagarajan, Bhavya Kailkhura, Timo Bremer
However, PGD is a brittle optimization technique that fails to identify the right projection (or latent vector) when the observation is corrupted, or perturbed even by a small amount.
no code implementations • 24 Nov 2019 • Sameeksha Katoch, Kowshik Thopalli, Jayaraman J. Thiagarajan, Pavan Turaga, Andreas Spanias
Exploiting known semantic relationships between fine-grained tasks is critical to the success of recent model agnostic approaches.
no code implementations • 30 Oct 2019 • Bindya Venkatesh, Jayaraman J. Thiagarajan
The role of uncertainty quantification (UQ) in deep learning has become crucial with growing use of predictive models in high-risk applications.
no code implementations • 30 Oct 2019 • Jayaraman J. Thiagarajan, Bindya Venkatesh, Deepta Rajan
Calibration error is commonly adopted for evaluating the quality of uncertainty estimators in deep neural networks.
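For reference, the calibration error metric mentioned above is usually computed as the standard binned expected calibration error (ECE): a confidence-weighted gap between predicted confidence and empirical accuracy per bin.

```python
import numpy as np

def expected_calibration_error(conf, correct, n_bins=10):
    """Binned ECE: sum over bins of (bin weight) * |mean confidence - accuracy|.

    conf: predicted confidences in (0, 1]; correct: 0/1 correctness indicators.
    """
    conf, correct = np.asarray(conf, float), np.asarray(correct, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)                # samples falling in this bin
        if mask.any():
            ece += mask.mean() * abs(conf[mask].mean() - correct[mask].mean())
    return ece
```

A perfectly calibrated model has zero ECE; a model that predicts 0.8 confidence while being wrong contributes the full 0.8 gap.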
2 code implementations • 3 Oct 2019 • Rushil Anirudh, Jayaraman J. Thiagarajan, Shusen Liu, Peer-Timo Bremer, Brian K. Spears
There is significant interest in using modern neural networks for scientific applications due to their effectiveness in modeling highly complex, non-linear problems in a data-driven fashion.
no code implementations • NeurIPS Workshop Deep_Invers 2019 • Rushil Anirudh, Hyojin Kim, Jayaraman J. Thiagarajan, K. Aditya Mohan, Kyle M. Champley
Limited angle CT reconstruction is an under-determined linear inverse problem that requires appropriate regularization techniques to be solved.
1 code implementation • 25 Sep 2019 • Shusen Liu, Rushil Anirudh, Jayaraman J. Thiagarajan, Peer-Timo Bremer
We present function preserving projections (FPP), a scalable linear projection technique for discovering interpretable relationships in high-dimensional data.
1 code implementation • 9 Sep 2019 • Jayaraman J. Thiagarajan, Bindya Venkatesh, Prasanna Sattigeri, Peer-Timo Bremer
With rapid adoption of deep learning in critical applications, the question of when and how much to trust these models often arises, which drives the need to quantify the inherent uncertainties.
no code implementations • 26 Jul 2019 • Jayaraman J. Thiagarajan, Satyananda Kashyap, Alexandros Karargyris
Weakly supervised instance labeling using only image-level labels, in lieu of expensive fine-grained pixel annotations, is crucial in several applications including medical image analysis.
2 code implementations • 19 Jul 2019 • Shusen Liu, Di Wang, Dan Maljovec, Rushil Anirudh, Jayaraman J. Thiagarajan, Sam Ade Jacobs, Brian C. Van Essen, David Hysom, Jae-Seung Yeom, Jim Gaffney, Luc Peterson, Peter B. Robinson, Harsh Bhatia, Valerio Pascucci, Brian K. Spears, Peer-Timo Bremer
With the rapid adoption of machine learning techniques for large-scale applications in science and engineering comes the convergence of two grand challenges in visualization.
no code implementations • 11 Jun 2019 • Kowshik Thopalli, Jayaraman J. Thiagarajan, Rushil Anirudh, Pavan Turaga
This paper presents a hybrid approach, where we assume simplified data geometry in the form of subspaces, and consider alignment as an auxiliary task to the primary task of maximizing performance on the source.
no code implementations • 6 Jun 2019 • Bhavya Kailkhura, Jayaraman J. Thiagarajan, Qunwei Li, Peer-Timo Bremer
This paper provides a general framework to study the effect of sampling properties of training data on the generalization error of the learned machine learning (ML) models.
no code implementations • 8 Apr 2019 • Vivek Sivaraman Narayanaswamy, Sameeksha Katoch, Jayaraman J. Thiagarajan, Huan Song, Andreas Spanias
We also investigate the impact of dense connections on the extraction process that encourage feature reuse and better gradient flow.
no code implementations • 20 Nov 2018 • Rushil Anirudh, Jayaraman J. Thiagarajan, Bhavya Kailkhura, Timo Bremer
Solving inverse problems continues to be a central challenge in computer vision.
no code implementations • 11 Nov 2018 • Kowshik Thopalli, Rushil Anirudh, Jayaraman J. Thiagarajan, Pavan Turaga
We present a novel unsupervised domain adaptation (DA) method for cross-domain visual recognition.
no code implementations • 1 Nov 2018 • Vivek Sivaraman Narayanaswamy, Jayaraman J. Thiagarajan, Huan Song, Andreas Spanias
State-of-the-art speaker diarization systems utilize knowledge from external data, in the form of a pre-trained distance metric, to effectively determine relative speaker identities to unseen data.
no code implementations • 1 Nov 2018 • Uday Shankar Shanthamallu, Jayaraman J. Thiagarajan, Andreas Spanias
Machine learning models that can exploit the inherent structure in data have gained prominence.
no code implementations • 31 Oct 2018 • Jayaraman J. Thiagarajan, Irene Kim, Rushil Anirudh, Peer-Timo Bremer
Techniques for understanding the functioning of complex machine learning models are becoming increasingly popular, not only to improve the validation process, but also to extract new insights about the data via exploratory analysis.
no code implementations • 31 Oct 2018 • Jayaraman J. Thiagarajan, Rushil Anirudh, Rahul Sridhar, Peer-Timo Bremer
Unsupervised dimension selection is an important problem that seeks to reduce dimensionality of data, while preserving the most useful characteristics.
no code implementations • 2 Oct 2018 • Uday Shankar Shanthamallu, Jayaraman J. Thiagarajan, Huan Song, Andreas Spanias
Though deep network embeddings, e.g., DeepWalk, are widely adopted for community discovery, we argue that feature learning with random node attributes, using graph neural networks, can be more effective.
no code implementations • 20 Sep 2018 • Jayaraman J. Thiagarajan, Deepta Rajan, Prasanna Sattigeri
The hypothesis that computational models can be reliable enough to be adopted in prognosis and patient care is revolutionizing healthcare.
no code implementations • 20 Sep 2018 • Huan Song, Jayaraman J. Thiagarajan
Inferencing with network data necessitates the mapping of its nodes into a vector space, where the relationships are preserved.
1 code implementation • 5 Sep 2018 • Gowtham Muniraju, Bhavya Kailkhura, Jayaraman J. Thiagarajan, Peer-Timo Bremer, Cihan Tepedelenlioglu, Andreas Spanias
Sampling one or more effective solutions from large search spaces is a recurring idea in machine learning, and sequential optimization has become a popular solution.
no code implementations • 4 Aug 2018 • Huan Song, Megan Willi, Jayaraman J. Thiagarajan, Visar Berisha, Andreas Spanias
In automatic speech processing systems, speaker diarization is a crucial front-end component to separate segments from different speakers.
no code implementations • 18 May 2018 • Rushil Anirudh, Jayaraman J. Thiagarajan, Bhavya Kailkhura, Timo Bremer
We solve this by making successive estimates on the model and the solution in an iterative fashion.
no code implementations • 18 Feb 2018 • Deepta Rajan, Jayaraman J. Thiagarajan
Processing temporal sequences is central to a variety of applications in health care, and in particular multi-channel Electrocardiogram (ECG) is a highly prevalent diagnostic modality that relies on robust sequence modeling.
no code implementations • 19 Dec 2017 • Jayaraman J. Thiagarajan, Shusen Liu, Karthikeyan Natesan Ramamurthy, Peer-Timo Bremer
Furthermore, we introduce a new approach to discover a diverse set of high quality linear projections and show that in practice the information of $k$ linear projections is often jointly encoded in $\sim k$ axis aligned plots.
no code implementations • 16 Dec 2017 • Bhavya Kailkhura, Jayaraman J. Thiagarajan, Charvi Rastogi, Pramod K. Varshney, Peer-Timo Bremer
Third, we propose an efficient estimator to evaluate the space-filling properties of sample designs in arbitrary dimensions and use it to develop an optimization framework to generate high quality space-filling designs.
no code implementations • CVPR 2018 • Rushil Anirudh, Hyojin Kim, Jayaraman J. Thiagarajan, K. Aditya Mohan, Kyle Champley, Timo Bremer
The classical techniques require measuring projections, called sinograms, from a full 180$^\circ$ view of the object.
no code implementations • 15 Nov 2017 • Rushil Anirudh, Jayaraman J. Thiagarajan, Rahul Sridhar, Peer-Timo Bremer
Interpretability has emerged as a crucial aspect of building trust in machine learning systems, aimed at providing insights into the working of complex neural networks that are otherwise opaque to a user.
no code implementations • 15 Nov 2017 • Huan Song, Jayaraman J. Thiagarajan, Prasanna Sattigeri, Andreas Spanias
To this end, we develop the DKMO (Deep Kernel Machine Optimization) framework, that creates an ensemble of dense embeddings using Nystrom kernel approximations and utilizes deep learning to generate task-specific representations through the fusion of the embeddings.
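The Nystrom step in the pipeline above can be sketched as follows: a dense embedding is built from the kernel evaluated against a set of landmark points, so that inner products of embeddings approximate the full kernel matrix. This is the generic Nystrom construction, not the DKMO code; the RBF kernel and landmark choice are assumptions for illustration.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def nystrom_embedding(X, landmarks, gamma=1.0):
    # Dense embedding Phi such that Phi @ Phi.T approximates the kernel K(X, X)
    C = rbf_kernel(X, landmarks, gamma)                  # n x m cross-kernel
    W = rbf_kernel(landmarks, landmarks, gamma)          # m x m landmark kernel
    U, s, _ = np.linalg.svd(W)                           # W is symmetric PSD
    W_inv_sqrt = U @ np.diag(1.0 / np.sqrt(np.maximum(s, 1e-12))) @ U.T
    return C @ W_inv_sqrt
```

With fewer landmarks than samples, the embedding is a low-rank kernel approximation; in DKMO, one such embedding per base kernel is fed into the deep fusion stage.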
no code implementations • 10 Nov 2017 • Huan Song, Deepta Rajan, Jayaraman J. Thiagarajan, Andreas Spanias
With widespread adoption of electronic health records, there is an increased emphasis for predictive models that can effectively deal with clinical time-series data.
no code implementations • 24 Apr 2017 • Rushil Anirudh, Jayaraman J. Thiagarajan
To address this, we propose a bootstrapped version of graph convolutional neural networks (G-CNNs) that utilizes an ensemble of weakly trained G-CNNs and reduces the sensitivity of the models to the choice of graph construction.
no code implementations • 28 Dec 2016 • Huan Song, Jayaraman J. Thiagarajan, Prasanna Sattigeri, Karthikeyan Natesan Ramamurthy, Andreas Spanias
Kernel fusion is a popular and effective approach for combining multiple features that characterize different aspects of data.
no code implementations • 14 Dec 2016 • Jayaraman J. Thiagarajan, Prasanna Sattigeri, Karthikeyan Natesan Ramamurthy, Bhavya Kailkhura
In this paper, we propose the use of quantile analysis to obtain local scale estimates for neighborhood graph construction.
no code implementations • 30 Nov 2016 • Qunwei Li, Bhavya Kailkhura, Jayaraman J. Thiagarajan, Zhenliang Zhang, Pramod K. Varshney
Influential node detection is a central research topic in social network analysis.
no code implementations • 29 Nov 2016 • Rushil Anirudh, Jayaraman J. Thiagarajan, Irene Kim, Wolfgang Polonik
We present an approach to model time series data from resting state fMRI for autism spectrum disorder (ASD) severity classification.
no code implementations • 22 Nov 2016 • Jayaraman J. Thiagarajan, Bhavya Kailkhura, Prasanna Sattigeri, Karthikeyan Natesan Ramamurthy
In this paper, we take a step in the direction of tackling the problem of interpretability without compromising the model accuracy.
no code implementations • 22 Jan 2016 • Prashant Khanduri, Bhavya Kailkhura, Jayaraman J. Thiagarajan, Pramod K. Varshney
This paper considers the problem of high dimensional signal detection in a large distributed network whose nodes can collaborate with their one-hop neighboring nodes (spatial collaboration).
no code implementations • 12 Nov 2015 • Karthikeyan Natesan Ramamurthy, Aleksandr Y. Aravkin, Jayaraman J. Thiagarajan
However, loss functions such as quantile and quantile Huber generalize the symmetric $\ell_1$ and Huber losses to the asymmetric setting, for a fixed quantile parameter.
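The tilted losses mentioned above admit a compact form. The sketch below implements the standard quantile Huber loss, quadratic near zero and linear in the tails, tilted by the quantile level `tau`; the `kappa` threshold and exact scaling convention are assumptions (several equivalent parameterizations exist in the literature).

```python
import numpy as np

def quantile_huber(r, tau, kappa=1.0):
    """Asymmetric (tilted) Huber loss on residuals r at quantile level tau.

    tau = 0.5 recovers a symmetric Huber-type loss; tau > 0.5 penalizes
    under-prediction (positive residuals) more heavily.
    """
    r = np.asarray(r, float)
    tilt = np.where(r < 0, 1.0 - tau, tau)               # asymmetric weighting
    hub = np.where(np.abs(r) <= kappa,
                   0.5 * r ** 2 / kappa,                 # quadratic near zero
                   np.abs(r) - 0.5 * kappa)              # linear in the tails
    return tilt * hub
```

Fitting a model by minimizing this loss at a fixed `tau` yields an estimate of the corresponding conditional quantile rather than the conditional mean.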
no code implementations • 26 Mar 2014 • Karthikeyan Natesan Ramamurthy, Aleksandr Y. Aravkin, Jayaraman J. Thiagarajan
We propose an algorithm to learn dictionaries and obtain sparse codes when the data reconstruction fidelity is measured using any smooth PLQ cost function.
no code implementations • 12 Mar 2013 • Karthikeyan Natesan Ramamurthy, Jayaraman J. Thiagarajan, Andreas Spanias
For case (c), we propose the combined orthogonal matching pursuit algorithm for coefficient recovery and derive the deterministic sparsity threshold under which recovery of the unique, sparsest coefficient vector is possible.
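For context, here is plain orthogonal matching pursuit, the greedy baseline the combined variant above builds on; this is not the paper's combined algorithm, only the standard routine: select the atom most correlated with the residual, then re-fit all selected coefficients by least squares.

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal Matching Pursuit.

    D: (d, n) dictionary with unit-norm columns; x: (d,) signal; k: target sparsity.
    Returns an n-vector of sparse coefficients.
    """
    residual, support = x.astype(float).copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))       # most correlated atom
        if j in support:                                 # residual already explained
            break
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)  # re-fit on support
        residual = x - D[:, support] @ coef
    a = np.zeros(D.shape[1])
    a[support] = coef
    return a
```

With an orthonormal dictionary, OMP recovers a k-sparse signal exactly in k iterations, which is the regime the deterministic sparsity thresholds characterize.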
no code implementations • 3 Mar 2013 • Jayaraman J. Thiagarajan, Karthikeyan Natesan Ramamurthy, Andreas Spanias
Algorithmic stability and generalization are desirable characteristics for dictionary learning algorithms that aim to build global dictionaries which can efficiently model any test data similar to the training samples.
no code implementations • 3 Mar 2013 • Jayaraman J. Thiagarajan, Karthikeyan Natesan Ramamurthy, Andreas Spanias
Descriptors that have diverse forms can be fused into a unified feature space in a principled manner using kernel methods.