Search Results for author: Abhimanyu Das

Found 18 papers, 1 paper with code

Transformers can optimally learn regression mixture models

no code implementations • 14 Nov 2023 • Reese Pathak, Rajat Sen, Weihao Kong, Abhimanyu Das

In this work, we investigate the hypothesis that transformers can learn an optimal predictor for mixtures of regressions.

regression
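As background for the predictor this paper studies, the classical baseline for mixtures of regressions is EM: alternate between soft-assigning points to components and refitting each component by weighted least squares. The sketch below is that standard EM baseline (with noise variance held fixed for simplicity), not the paper's transformer-based approach; all names are illustrative.

```python
import numpy as np

def em_mixture_regression(X, y, k=2, iters=50, seed=0):
    """Classical EM for a mixture of k linear regressions (illustrative baseline)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(size=(k, d))      # one weight vector per component
    pi = np.full(k, 1.0 / k)         # mixing weights
    sigma2 = 1.0                     # noise variance held fixed for simplicity
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resid = y[:, None] - X @ W.T                   # (n, k) residuals
        logp = -0.5 * resid ** 2 / sigma2 + np.log(pi)
        logp -= logp.max(axis=1, keepdims=True)        # stabilize before exp
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted least squares per component
        for j in range(k):
            Xw = X * r[:, j:j + 1]
            W[j] = np.linalg.solve(Xw.T @ X + 1e-6 * np.eye(d), Xw.T @ y)
        pi = r.mean(axis=0)
    return W, pi

# toy data drawn from two regression components
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 3))
w_true = np.array([[2.0, 0.0, -1.0], [-2.0, 1.0, 0.5]])
z = rng.integers(0, 2, size=400)
y = (X * w_true[z]).sum(axis=1) + 0.1 * rng.normal(size=400)
W, pi = em_mixture_regression(X, y, k=2)
```

The in-context predictor the paper analyzes is, at the population level, the posterior-weighted combination of component predictions that the E-step computes.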

A decoder-only foundation model for time-series forecasting

no code implementations • 14 Oct 2023 • Abhimanyu Das, Weihao Kong, Rajat Sen, Yichen Zhou

Motivated by recent advances in large language models for Natural Language Processing (NLP), we design a time-series foundation model for forecasting whose out-of-the-box zero-shot performance on a variety of public datasets comes close to the accuracy of state-of-the-art supervised forecasting models for each individual dataset.

Time Series • Time Series Forecasting

Linear Regression using Heterogeneous Data Batches

no code implementations • 5 Sep 2023 • Ayush Jain, Rajat Sen, Weihao Kong, Abhimanyu Das, Alon Orlitsky

A common approach assumes that the sources fall in one of several unknown subgroups, each with an unknown input distribution and input-output relationship.

regression

Long-term Forecasting with TiDE: Time-series Dense Encoder

2 code implementations • 17 Apr 2023 • Abhimanyu Das, Weihao Kong, Andrew Leach, Shaan Mathur, Rajat Sen, Rose Yu

Recent work has shown that simple linear models can outperform several Transformer based approaches in long term time-series forecasting.

Anomaly Detection • Time Series • +1
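The "simple linear models" referenced in this abstract map a lookback window directly to the full forecast horizon with one linear layer. The sketch below fits such a direct linear forecaster by ridge-regularized least squares on a toy seasonal series; it illustrates the baseline the abstract alludes to, not the TiDE architecture itself, and the window/horizon sizes are arbitrary choices.

```python
import numpy as np

def fit_linear_forecaster(series, lookback=24, horizon=8):
    """Fit one linear map from a lookback window to the whole horizon
    (the 'simple linear model' baseline; illustrative sketch)."""
    X, Y = [], []
    for t in range(len(series) - lookback - horizon + 1):
        X.append(series[t:t + lookback])
        Y.append(series[t + lookback:t + lookback + horizon])
    X, Y = np.array(X), np.array(Y)
    # ridge-regularized least squares: W maps (lookback,) -> (horizon,)
    W = np.linalg.solve(X.T @ X + 1e-3 * np.eye(lookback), X.T @ Y)
    return W

# toy seasonal series with period 24 plus small noise
t = np.arange(500)
series = np.sin(2 * np.pi * t / 24) + 0.05 * np.random.default_rng(0).normal(size=500)
W = fit_linear_forecaster(series)
pred = series[-24:] @ W   # forecast the next 8 steps in one shot
```

Because a sinusoid is exactly linearly predictable from its past, the learned map recovers the seasonal continuation almost perfectly, which is the intuition behind why such baselines are hard to beat on strongly periodic benchmarks.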

Efficient List-Decodable Regression using Batches

no code implementations • 23 Nov 2022 • Abhimanyu Das, Ayush Jain, Weihao Kong, Rajat Sen

We begin the study of list-decodable linear regression using batches.

regression

Dirichlet Proportions Model for Hierarchically Coherent Probabilistic Forecasting

no code implementations • 21 Apr 2022 • Abhimanyu Das, Weihao Kong, Biswajit Paria, Rajat Sen

Probabilistic, hierarchically coherent forecasting is a key problem in many practical forecasting applications -- the goal is to obtain coherent probabilistic predictions for a large number of time series arranged in a pre-specified tree hierarchy.

STS • Time Series • +1
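"Coherent" here has a precise meaning: every parent series in the hierarchy must equal the sum of its children. The minimal example below encodes a three-node tree as a summing matrix and obtains coherent forecasts bottom-up, forecasting only the leaves and aggregating. This illustrates the coherence constraint itself; the paper's Dirichlet-proportions model is a considerably more sophisticated way to enforce it probabilistically.

```python
import numpy as np

# Toy hierarchy: root = leaf_a + leaf_b, encoded as a summing matrix S
# mapping leaf-level values to all nodes in the tree.
S = np.array([[1, 1],   # root  = a + b
              [1, 0],   # leaf a
              [0, 1]])  # leaf b

leaf_forecasts = np.array([3.0, 5.0])  # independent leaf-level predictions
coherent = S @ leaf_forecasts          # forecasts for [root, a, b]

# The aggregation constraint holds by construction.
assert coherent[0] == coherent[1] + coherent[2]
```

Any vector of the form `S @ b` is coherent by construction, which is why hierarchical methods often parameterize forecasts at the leaves (or project incoherent base forecasts onto the column space of `S`).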

Leveraging Initial Hints for Free in Stochastic Linear Bandits

no code implementations • 8 Mar 2022 • Ashok Cutkosky, Chris Dann, Abhimanyu Das, Qiuyi Zhang

We study the setting of optimizing with bandit feedback with additional prior knowledge provided to the learner in the form of an initial hint of the optimal action.

A Convergence Analysis of Gradient Descent on Graph Neural Networks

no code implementations • NeurIPS 2021 • Pranjal Awasthi, Abhimanyu Das, Sreenivas Gollapudi

Graph Neural Networks~(GNNs) are a powerful class of architectures for solving learning problems on graphs.

On the benefits of maximum likelihood estimation for Regression and Forecasting

no code implementations • ICLR 2022 • Pranjal Awasthi, Abhimanyu Das, Rajat Sen, Ananda Theertha Suresh

We also demonstrate empirically that our method instantiated with a well-designed general purpose mixture likelihood family can obtain superior performance for a variety of tasks across time-series forecasting and regression datasets with different data distributions.

regression • Time Series • +1
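The benefit of maximum likelihood over plain squared error is easiest to see with heteroscedastic noise: the Gaussian MLE down-weights noisy observations, whereas ordinary least squares treats them all equally. The sketch below contrasts the two on synthetic data with a known input-dependent noise scale; it is a minimal illustration of the MLE viewpoint, not the general-purpose mixture-likelihood family the paper proposes.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=2000)
sigma = 0.1 + 0.5 * np.abs(x)              # input-dependent noise scale
y = 2.0 * x + sigma * rng.normal(size=2000)

# Ordinary least squares ignores the noise model entirely ...
a_ols = np.sum(x * y) / np.sum(x * x)

# ... while the Gaussian MLE with this (here, known) noise model reduces to
# inverse-variance weighted least squares.
w = 1.0 / sigma ** 2
a_mle = np.sum(w * x * y) / np.sum(w * x * x)
```

Both estimators are consistent for the true slope, but the likelihood-weighted one has lower variance; the paper's point is that learning a flexible likelihood family confers this kind of advantage without knowing the noise model in advance.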

Hierarchically Regularized Deep Forecasting

no code implementations • 14 Jun 2021 • Biswajit Paria, Rajat Sen, Amr Ahmed, Abhimanyu Das

Hierarchical forecasting is a key problem in many practical multivariate forecasting applications - the goal is to simultaneously predict a large number of correlated time series that are arranged in a pre-specified aggregation hierarchy.

Time Series • Time Series Analysis

Beyond GNNs: A Sample Efficient Architecture for Graph Problems

no code implementations • 1 Jan 2021 • Pranjal Awasthi, Abhimanyu Das, Sreenivas Gollapudi

Finally, we empirically demonstrate the effectiveness of our proposed architecture for a variety of graph problems.

Generalization Bounds

Learning the gravitational force law and other analytic functions

no code implementations • 15 May 2020 • Atish Agarwala, Abhimanyu Das, Rina Panigrahy, Qiuyi Zhang

We present experimental evidence that the many-body gravitational force function is easier to learn with ReLU networks as compared to networks with exponential activations.

On the Learnability of Deep Random Networks

no code implementations • 8 Apr 2019 • Abhimanyu Das, Sreenivas Gollapudi, Ravi Kumar, Rina Panigrahy

In this paper we study the learnability of deep random networks from both theoretical and practical points of view.

Selecting Diverse Features via Spectral Regularization

no code implementations • NeurIPS 2012 • Abhimanyu Das, Anirban Dasgupta, Ravi Kumar

We compare our algorithms to traditional greedy and $\ell_1$-regularization schemes and show that we obtain a more diverse set of features that result in the regression problem being stable under perturbations.

feature selection • regression
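The idea of spectral regularization for diversity is to reward selected feature sets whose covariance submatrix is well-conditioned, so near-duplicate features are avoided. The sketch below adds the smallest eigenvalue of the selected covariance block as a diversity bonus inside a greedy forward-selection loop; this is an illustrative rendering of the idea, not the paper's exact algorithm, and `lam` is an arbitrary trade-off parameter.

```python
import numpy as np

def greedy_diverse_select(X, y, k, lam=0.1):
    """Greedy forward selection trading off R^2 fit against a spectral
    diversity bonus (smallest eigenvalue of the selected covariance block).
    Illustrative sketch only."""
    n, d = X.shape
    C = X.T @ X / n
    selected = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for j in range(d):
            if j in selected:
                continue
            S = selected + [j]
            Xs = X[:, S]
            w, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            fit = 1 - np.sum((y - Xs @ w) ** 2) / np.sum((y - y.mean()) ** 2)
            diversity = np.linalg.eigvalsh(C[np.ix_(S, S)])[0]  # smallest eigenvalue
            score = fit + lam * diversity
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=200)   # feature 1 near-duplicates feature 0
y = X[:, 0] + X[:, 2] + 0.1 * rng.normal(size=200)
sel = greedy_diverse_select(X, y, k=2)
```

On this toy data the diversity term steers the second pick away from the near-duplicate pair: the selected set contains feature 2 plus one of the twins, never both, which is the stability-under-perturbation behavior the abstract highlights.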
