no code implementations • 2 Mar 2023 • Sreelekha Guggilam, Varun Chandola, Abani Patra
For this, we propose the LAD Improved Iterative Training (LIIT), a novel training approach for ANN using large deviations principle to generate and iteratively update training samples in a fast and efficient setting.
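The LIIT procedure itself is not spelled out in this snippet; the following is a hypothetical sketch of the general idea only — iteratively refreshing a training batch with the most extreme samples from a pool. The function names and the deviation score are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def deviation_score(X):
    """Stand-in score: mean squared standardized deviation from the pool
    mean (the paper uses a large-deviations-based score; this is a proxy)."""
    z = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
    return (z ** 2).mean(axis=1)

def iterative_training_sets(pool, rounds=3, batch=32, seed=0):
    """Hypothetical iterative update: each round keeps half the current
    batch at random and refills it with the highest-deviation samples."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(pool), size=batch, replace=False)
    for _ in range(rounds):
        keep = rng.choice(idx, size=batch // 2, replace=False)
        ranked = np.argsort(-deviation_score(pool))  # most extreme first
        fresh = [i for i in ranked if i not in set(keep)][: batch - len(keep)]
        idx = np.concatenate([keep, fresh])
        yield pool[idx]

pool = np.random.default_rng(1).normal(size=(500, 8))
batches = list(iterative_training_sets(pool))
print(len(batches), batches[0].shape)  # 3 (32, 8)
```

Each yielded batch can then be used for one round of ANN training; the resampling keeps batches small while biasing them toward informative, high-deviation samples.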
no code implementations • 27 Nov 2022 • Syed Mohammed Arshad Zaidi, Varun Chandola, EunHye Yoo
Deep learning approaches for spatio-temporal prediction problems, such as crowd-flow prediction, assume the data to be a fixed, regularly shaped tensor and face challenges in handling irregular, sparse data tensors.
no code implementations • 25 Nov 2022 • Amol Salunkhe, Georgios Georgalis, Abani Patra, Varun Chandola
We investigate two strategies of creating these ensemble models: one that keeps the flamelet origin information (Flamelets strategy) and one that ignores the origin and considers all the data independently (Points strategy).
no code implementations • 6 Nov 2022 • Amol Salunkhe, Dwyer Deighan, Paul DesJardin, Varun Chandola
Modeling a turbulent combustion system requires modeling both the underlying chemistry and the turbulent flow.
no code implementations • 28 Sep 2021 • Sreelekha Guggilam, Varun Chandola, Abani Patra
We propose an anomaly detection algorithm that can scale to high-dimensional data using concepts from the theory of large deviations.
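The paper's exact algorithm is not given in this snippet; as a minimal illustrative sketch of the underlying idea, a large-deviations-flavored score can flag points whose standardized deviation from the sample mean is extreme (for Gaussian data, the Cramér rate function of a sample is proportional to its squared standardized deviation). The function name and normalization are assumptions:

```python
import numpy as np

def deviation_scores(X):
    """Score each row by a large-deviations-style measure: the mean
    squared standardized deviation from the sample mean, a stand-in
    for a rate function. Higher scores are more anomalous."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0) + 1e-12
    z = (X - mu) / sigma
    return 0.5 * (z ** 2).mean(axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X[0] += 8.0  # inject an obvious anomaly
scores = deviation_scores(X)
print(int(np.argmax(scores)))  # the injected point gets the top score
```

Because the score is a per-dimension average, it stays comparable as dimensionality grows, which is what makes this style of score attractive for high-dimensional data.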
no code implementations • 24 Sep 2021 • Marc Böhlen, Varun Chandola, Wawan Sujarwo, Raunaq Jain
Image classifiers work effectively when applied on structured images, yet they often fail when applied on images with very high visual complexity.
1 code implementation • 1 Nov 2019 • Sreelekha Guggilam, Syed M. A. Zaidi, Varun Chandola, Abani K. Patra
Most current clustering-based anomaly detection methods use a scoring schema and thresholds to classify anomalies.
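The score-and-threshold schema referred to above can be sketched as follows: score each point by its distance to the nearest cluster centroid, then label points whose score exceeds a fixed threshold as anomalies. This is the generic pattern, not the paper's specific method:

```python
import numpy as np

def cluster_anomaly_labels(X, centroids, threshold):
    """Classic score-and-threshold schema: score each point by the
    distance to its nearest cluster centroid, then label points whose
    score exceeds the threshold as anomalies."""
    # pairwise distances: shape (n_points, n_centroids)
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    scores = d.min(axis=1)
    return scores, scores > threshold

rng = np.random.default_rng(1)
normal = rng.normal(0.0, 0.5, size=(100, 2))
outlier = np.array([[6.0, 6.0]])
X = np.vstack([normal, outlier])
centroids = np.array([[0.0, 0.0]])  # assume one cluster at the origin
scores, labels = cluster_anomaly_labels(X, centroids, threshold=3.0)
print(int(labels.sum()))  # only the injected outlier is flagged
```

The drawback the listing alludes to is visible here: the result depends entirely on a hand-picked threshold, which motivates alternatives that avoid it.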
1 code implementation • 29 May 2019 • Sreelekha Guggilam, S. M. Arshad Zaidi, Varun Chandola, Abani Patra
Data-driven anomaly detection methods typically build a model for the normal behavior of the target system, and score each data instance with respect to this model.
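That generic recipe — fit a model of normal behavior, then score each new instance against it — can be sketched with a per-dimension Gaussian as the normal model. The Gaussian choice is purely illustrative; the scoring model in any given method will differ:

```python
import numpy as np

def fit_normal_model(X_train):
    """Model 'normal behavior' as an independent Gaussian per dimension."""
    return X_train.mean(axis=0), X_train.std(axis=0) + 1e-12

def score(x, model):
    """Anomaly score = negative log-likelihood up to a constant;
    larger values mean the instance fits the normal model worse."""
    mu, sigma = model
    return float(np.sum(0.5 * ((x - mu) / sigma) ** 2 + np.log(sigma)))

rng = np.random.default_rng(2)
train = rng.normal(size=(1000, 4))   # data assumed to be normal behavior
model = fit_normal_model(train)
typical = rng.normal(size=4)
odd = np.full(4, 10.0)
print(score(odd, model) > score(typical, model))  # prints True
```

The key design point is that the model is fit only on (presumed) normal data, so anomalies are anything the model assigns low likelihood.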
no code implementations • 1 Oct 2018 • Duc Thanh Anh Luong, Varun Chandola
We study the behavior of a Time-Aware Long Short-Term Memory Autoencoder, a state-of-the-art method, in the context of learning latent representations from irregularly sampled patient data.
no code implementations • 24 Apr 2018 • Suchismit Mahapatra, Varun Chandola
We present theoretical results showing that the quality of the learned manifold converges asymptotically as the size of the data increases.
no code implementations • 3 Apr 2018 • Jialiang Jiang, Sharon Hewner, Varun Chandola
However, we focus on assigning a readmission risk label to a patient based on their disease history.
no code implementations • 19 Feb 2018 • Frank Schoeneman, Varun Chandola, Nils Napp, Olga Wodo, Jaroslaw Zola
Scientific and engineering processes deliver massive high-dimensional data sets that are generated as non-linear transformations of an initial state and a few process parameters.
1 code implementation • 23 Nov 2017 • Marc Böhlen, Varun Chandola, Amol Salunkhe
This paper follows the recent history of automated beauty competitions to discuss how machine learning techniques, in particular neural networks, alter the way attractiveness is handled and how this impacts the cultural landscape.
Computers and Society
no code implementations • 17 Oct 2017 • Suchismit Mahapatra, Varun Chandola
Manifold learning based methods have been widely used for non-linear dimensionality reduction (NLDR).
no code implementations • 13 Nov 2016 • Frank Schoeneman, Suchismit Mahapatra, Varun Chandola, Nils Napp, Jaroslaw Zola
In this paper, we argue that a stable manifold can be learned using only a fraction of the stream, and the remaining stream can be mapped to the manifold in a significantly less costly manner.
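A minimal sketch of that learn-on-a-fraction, map-the-rest pattern, using PCA as a stand-in for the manifold learner (the actual method is a non-linear spectral learner; only the overall pattern is illustrated here, and all names are assumptions):

```python
import numpy as np

def fit_on_fraction(stream, fraction=0.1, dim=2):
    """Learn an embedding from only an initial fraction of the stream."""
    n = max(dim, int(len(stream) * fraction))
    sample = stream[:n]
    mu = sample.mean(axis=0)
    # principal directions estimated from the small sample only
    _, _, vt = np.linalg.svd(sample - mu, full_matrices=False)
    return mu, vt[:dim]

def map_rest(stream, mu, components):
    """Map remaining points with a cheap matrix product instead of
    re-running the expensive learner on the full stream."""
    return (stream - mu) @ components.T

stream = np.random.default_rng(3).normal(size=(1000, 10))
mu, comps = fit_on_fraction(stream, fraction=0.1)  # learn from 100 points
embedded = map_rest(stream[100:], mu, comps)       # map the other 900
print(embedded.shape)  # (900, 2)
```

Once the embedding stabilizes, mapping each new point costs a single matrix-vector product rather than a full re-fit, which is the source of the claimed cost savings.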