1 code implementation • 7 Jan 2024 • Majd Al Aawar, Srikar Mutnuri, Mansooreh Montazerin, Ajitesh Srivastava
The current methods for predicting the spread of new variants rely on statistical modeling; however, these methods work only when the new variant has already arrived in the region of interest and has reached significant prevalence.
no code implementations • 7 Sep 2023 • Ajitesh Srivastava
Measuring distance or similarity between time-series data is a fundamental aspect of many applications including classification, clustering, and ensembling/alignment.
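For context, a classic example of such a measure is dynamic time warping (DTW), which aligns two series that may be shifted or stretched in time. The sketch below is a minimal textbook DTW implementation, not the measure proposed in this paper.

```python
def dtw_distance(a, b):
    """Classic dynamic time warping distance between two numeric sequences.
    dp[i][j] holds the cost of the best alignment of a[:i] with b[:j]."""
    n, m = len(a), len(b)
    INF = float("inf")
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # a[i-1] matched again
                                  dp[i][j - 1],      # b[j-1] matched again
                                  dp[i - 1][j - 1])  # one-to-one match
    return dp[n][m]
```

Unlike Euclidean distance, DTW gives zero distance between `[1, 2, 3]` and `[1, 2, 2, 3]`, since the repeated value aligns at no cost.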
no code implementations • 3 Sep 2023 • Sarthak Kumar Maharana, Krishna Kamal Adidam, Shoumik Nandi, Ajitesh Srivastava
We demonstrate the impact of different pre-trained features on this challenging AAI task under low-resource conditions.
no code implementations • 30 Oct 2022 • James Orme-Rogers, Ajitesh Srivastava
The traditional methods for detecting autism spectrum disorder (ASD) are expensive, subjective, and time-consuming, often taking years for a diagnosis, with many children growing well into adolescence and even adulthood before the disorder is finally confirmed.
no code implementations • 6 Jul 2022 • Ajitesh Srivastava
This paper presents the evolution of the SIkJalpha model and its many versions that have been used to submit to these collaborative efforts since the beginning of the pandemic.
1 code implementation • 17 Jun 2022 • Majd Al Aawar, Ajitesh Srivastava
We demonstrate that our Random Forest-based approach is able to improve upon the forecasts of the individual predictors in terms of mean absolute error, coverage, and weighted interval score.
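The weighted interval score (WIS) mentioned here is a standard metric in collaborative forecast hubs for scoring probabilistic forecasts expressed as a median plus central prediction intervals. A minimal sketch of the hub convention (penalizing interval width plus misses, weighted by interval level) is:

```python
def interval_score(lower, upper, y, alpha):
    """Interval score for a central (1 - alpha) prediction interval:
    width plus a penalty proportional to how far y falls outside."""
    score = upper - lower
    if y < lower:
        score += (2.0 / alpha) * (lower - y)
    elif y > upper:
        score += (2.0 / alpha) * (y - upper)
    return score

def weighted_interval_score(median, intervals, y):
    """WIS over a median and a list of (alpha, lower, upper) intervals,
    following the convention used by COVID-19 forecast hubs."""
    k = len(intervals)
    total = 0.5 * abs(y - median)  # absolute-error term, weight 1/2
    for alpha, lower, upper in intervals:
        total += (alpha / 2.0) * interval_score(lower, upper, y, alpha)
    return total / (k + 0.5)
```

Lower is better: a forecast with narrow intervals that still cover the truth scores well, while over-confident misses are penalized heavily.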
no code implementations • 29 May 2022 • Pengmiao Zhang, Ajitesh Srivastava, Anant V. Nori, Rajgopal Kannan, Viktor K. Prasanna
Data Prefetching is a technique that can hide memory latency by fetching data before it is needed by a program.
1 code implementation • 1 May 2022 • Pengmiao Zhang, Ajitesh Srivastava, Anant V. Nori, Rajgopal Kannan, Viktor K. Prasanna
To reduce vocabulary size, we use fine-grained address segmentation as input.
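To see why segmentation shrinks the vocabulary: treating a full 64-bit address as one token would require an astronomically large vocabulary, while splitting it into fixed-width chunks bounds each chunk's vocabulary at 2^bits tokens. The helper below is an illustrative sketch (segment widths and count are hypothetical, not the paper's exact scheme):

```python
def segment_address(addr, segment_bits=16, num_segments=4):
    """Split an address into fixed-width segments, least-significant first.
    Each segment is a token drawn from a vocabulary of size 2**segment_bits,
    instead of treating the whole address as a single token."""
    mask = (1 << segment_bits) - 1
    return [(addr >> (i * segment_bits)) & mask for i in range(num_segments)]

def reassemble(segments, segment_bits=16):
    """Inverse of segment_address: recombine segments into the address."""
    addr = 0
    for i, seg in enumerate(segments):
        addr |= seg << (i * segment_bits)
    return addr
```

With 16-bit segments, each position needs at most 65,536 embedding entries rather than one entry per distinct address ever observed.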
1 code implementation • NeurIPS 2021 • Hanqing Zeng, Muhan Zhang, Yinglong Xia, Ajitesh Srivastava, Andrey Malevich, Rajgopal Kannan, Viktor Prasanna, Long Jin, Ren Chen
We propose a design principle to decouple the depth and scope of GNNs -- to generate representation of a target entity (i.e., a node or an edge), we first extract a localized subgraph as the bounded-size scope, and then apply a GNN of arbitrary depth on top of the subgraph.
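The "bounded-size scope" step can be pictured as extracting the k-hop neighborhood around the target before running a GNN of any depth inside it. The BFS sketch below is a simplified stand-in; the paper's actual subgraph samplers may differ.

```python
from collections import deque

def k_hop_subgraph(adj, target, k):
    """Extract nodes within k hops of `target` plus the induced edges --
    the bounded 'scope' on which a deep GNN would then operate.
    `adj` maps node -> iterable of neighbours."""
    seen = {target: 0}           # node -> hop distance
    queue = deque([target])
    while queue:
        u = queue.popleft()
        if seen[u] == k:         # do not expand beyond k hops
            continue
        for v in adj[u]:
            if v not in seen:
                seen[v] = seen[u] + 1
                queue.append(v)
    nodes = set(seen)
    edges = [(u, v) for u in nodes for v in adj[u] if v in nodes]
    return nodes, edges
```

The key point of the design principle is that the GNN depth is no longer tied to k: a 10-layer GNN can run on a 2-hop subgraph without its receptive field exploding across the whole graph.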
Ranked #3 on Node Classification on Reddit
1 code implementation • 10 May 2021 • Hongkuan Zhou, Ajitesh Srivastava, Hanqing Zeng, Rajgopal Kannan, Viktor Prasanna
In this paper, we propose to accelerate GNN inference by pruning the dimensions in each layer with negligible accuracy loss.
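One common way to prune dimensions is to rank a layer's hidden dimensions by a magnitude-based importance score and drop the weakest ones. The sketch below uses a simple L1-norm criterion for illustration; the paper's pruning criterion may be different.

```python
def prune_dimensions(weights, keep):
    """Rank hidden dimensions (columns of `weights`) by L1 norm and keep
    only the `keep` most important ones.
    `weights` is a list of rows; returns (pruned matrix, kept column indices)."""
    num_cols = len(weights[0])
    importance = [sum(abs(row[j]) for row in weights) for j in range(num_cols)]
    kept = sorted(range(num_cols), key=lambda j: importance[j], reverse=True)[:keep]
    kept.sort()  # preserve the original column order
    pruned = [[row[j] for j in kept] for row in weights]
    return pruned, kept
```

Shrinking each layer's width cuts both the memory footprint and the per-layer matrix-multiply cost at inference time, which is where the speedup comes from.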
no code implementations • 4 Feb 2021 • Ajitesh Srivastava, Tianjian Xu, Viktor K. Prasanna
In this paper, we introduce a prototype of EpiBench, which is currently running and accepting submissions for the task of forecasting COVID-19 cases and deaths in the US states. We demonstrate that the prototype can be used to develop an ensemble relying on fully automated epidemic forecasts (no human intervention) that reaches the level of the human-expert ensemble currently used by the CDC.
2 code implementations • 2 Dec 2020 • Hanqing Zeng, Muhan Zhang, Yinglong Xia, Ajitesh Srivastava, Andrey Malevich, Rajgopal Kannan, Viktor Prasanna, Long Jin, Ren Chen
We propose a simple "deep GNN, shallow sampler" design principle to improve both the GNN accuracy and efficiency -- to generate representation of a target node, we use a deep GNN to pass messages only within a shallow, localized subgraph.
2 code implementations • 5 Oct 2020 • Hanqing Zeng, Hongkuan Zhou, Ajitesh Srivastava, Rajgopal Kannan, Viktor Prasanna
For feature propagation within subgraphs, we improve cache utilization and reduce DRAM traffic by data partitioning.
4 code implementations • 10 Jul 2020 • Ajitesh Srivastava, Tianjian Xu, Viktor K. Prasanna
Many of these methods are based on traditional epidemiological models, which rely on simulations or Bayesian inference to learn many parameters simultaneously.
1 code implementation • 3 Jun 2020 • Ajitesh Srivastava, Viktor K. Prasanna
A critical factor that can hinder accurate long-term forecasts is the number of unreported/asymptomatic cases.
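A toy discrete-time SIR simulation makes the issue concrete: with a constant reporting fraction, the observed case counts systematically understate the true epidemic size. The parameters below are illustrative only; this is not the model used in the paper.

```python
def simulate_sir(beta, gamma, report_rate, population, i0, steps):
    """Discrete-time SIR model with a constant reporting fraction.
    Returns (true cumulative infections, reported cumulative infections)."""
    s, i = population - i0, float(i0)
    true_cum = float(i0)
    reported_cum = report_rate * i0
    for _ in range(steps):
        new_infections = beta * s * i / population  # S -> I flow
        recoveries = gamma * i                      # I -> R flow
        s -= new_infections
        i += new_infections - recoveries
        true_cum += new_infections
        reported_cum += report_rate * new_infections
    return true_cum, reported_cum
```

With a 25% reporting rate, observed cumulative cases are a quarter of the true burden, so a model fit only to reported data without accounting for this gap will misjudge the remaining susceptible population and the long-term trajectory.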
1 code implementation • 23 Apr 2020 • Ajitesh Srivastava, Viktor K. Prasanna
In particular, we show that changes in model parameters over time can help us quantify how well a state or a country has responded to the epidemic.
no code implementations • 17 Mar 2020 • Ajitesh Srivastava, Naifeng Zhang, Rajgopal Kannan, Viktor K. Prasanna
More desirable is a high-level language where the domain-specialist simply specifies the workload in terms of high-level operations (e.g., matrix-multiply(A, B)), and the compiler identifies the best implementation fully utilizing the heterogeneous platform.
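The idea of mapping one high-level operation onto the best available kernel can be sketched as a tiny auto-tuner: several candidate implementations of the same op are benchmarked and the fastest wins. This is a hypothetical illustration of the concept, not the paper's compiler.

```python
import time

def matmul_rowwise(a, b):
    """Plain triple-loop matrix multiply on lists of rows."""
    n, k, m = len(a), len(b), len(b[0])
    out = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for p in range(k):
            aip = a[i][p]
            for j in range(m):
                out[i][j] += aip * b[p][j]
    return out

def matmul_transposed(a, b):
    """Same op, iterating over a pre-transposed b for better locality."""
    bt = list(map(list, zip(*b)))
    return [[sum(x * y for x, y in zip(row, col)) for col in bt] for row in a]

def select_kernel(candidates, *args):
    """Toy auto-tuner: time each candidate implementation of the same
    high-level op and return (fastest kernel, its result)."""
    best, best_time, best_out = None, float("inf"), None
    for kernel in candidates:
        start = time.perf_counter()
        out = kernel(*args)
        elapsed = time.perf_counter() - start
        if elapsed < best_time:
            best, best_time, best_out = kernel, elapsed, out
    return best, best_out
```

On a real heterogeneous platform the candidates would be CPU, GPU, or FPGA kernels rather than two Python loops, but the selection logic is the same: the specialist writes `matrix-multiply(A, B)` once and the system picks the implementation.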
no code implementations • 16 Oct 2019 • Yue Niu, Hanqing Zeng, Ajitesh Srivastava, Kartik Lakhotia, Rajgopal Kannan, Yanzhi Wang, Viktor Prasanna
On the other hand, weight pruning techniques address the redundancy in model parameters by converting dense convolutional kernels into sparse ones.
7 code implementations • ICLR 2020 • Hanqing Zeng, Hongkuan Zhou, Ajitesh Srivastava, Rajgopal Kannan, Viktor Prasanna
Graph Convolutional Networks (GCNs) are powerful models for learning representations of attributed graphs.
Ranked #1 on Link Property Prediction on ogbl-citation2
2 code implementations • 28 Oct 2018 • Hanqing Zeng, Hongkuan Zhou, Ajitesh Srivastava, Rajgopal Kannan, Viktor Prasanna
However, a major challenge is to reduce the complexity of layered GCNs and make them parallelizable and scalable on very large graphs -- state-of-the-art techniques are unable to achieve scalability without losing accuracy and efficiency.