no code implementations • NeurIPS 2023 • Abdullah Alomar, Munther Dahleh, Sean Mann, Devavrat Shah
However, a theoretical underpinning of multi-stage learning algorithms involving both deterministic and stationary components has been absent in the literature, despite the pervasiveness of such algorithms.
1 code implementation • 5 Jan 2022 • Abdullah Alomar, Pouya Hamadanian, Arash Nasr-Esfahany, Anish Agarwal, Mohammad Alizadeh, Devavrat Shah
Key to CausalSim is mapping unbiased trace-driven simulation to a tensor completion problem with extremely sparse observations.
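The abstract does not spell out the completion machinery, so here is a minimal sketch of low-rank completion under sparse observations using alternating least squares — a standard technique in the same spirit, shown on a matrix for simplicity. The function name `als_complete` and all defaults are hypothetical; this is not CausalSim's actual algorithm.

```python
import numpy as np

def als_complete(M, rank=2, n_iters=50, reg=1e-3, rng=None):
    """Complete a partially observed matrix (NaN = unobserved) by fitting
    a rank-`rank` factorization U @ V.T via alternating least squares."""
    rng = np.random.default_rng(rng)
    obs = ~np.isnan(M)                      # boolean mask of observed entries
    n, m = M.shape
    U = rng.standard_normal((n, rank)) * 0.1
    V = rng.standard_normal((m, rank)) * 0.1
    I = reg * np.eye(rank)                  # ridge term keeps solves well-posed
    for _ in range(n_iters):
        for i in range(n):                  # update row factors, V fixed
            cols = obs[i]
            Vc = V[cols]
            U[i] = np.linalg.solve(Vc.T @ Vc + I, Vc.T @ M[i, cols])
        for j in range(m):                  # update column factors, U fixed
            rows = obs[:, j]
            Uc = U[rows]
            V[j] = np.linalg.solve(Uc.T @ Uc + I, Uc.T @ M[rows, j])
    return U @ V.T                          # dense low-rank estimate
```

With a noiseless low-rank matrix and a reasonable fraction of observed entries, the alternating solves recover the missing entries up to the small ridge bias; the sparser the observations, the more the low-rank assumption has to carry.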
no code implementations • NeurIPS 2021 • Arwa Alanqary, Abdullah Alomar, Devavrat Shah
The change point in such a setting corresponds to a change in the underlying spatio-temporal model.
no code implementations • NeurIPS 2021 • Anish Agarwal, Abdullah Alomar, Varkey Alumootil, Devavrat Shah, Dennis Shen, Zhi Xu, Cindy Yang
We consider offline reinforcement learning (RL) with heterogeneous agents under severe data scarcity, i.e., we only observe a single historical trajectory for every agent under an unknown, potentially sub-optimal policy.
no code implementations • 24 Jun 2020 • Anish Agarwal, Abdullah Alomar, Devavrat Shah
We introduce and analyze a variant of multivariate singular spectrum analysis (mSSA), a popular time series method to impute and forecast a multivariate time series.
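To make the mSSA idea concrete, here is a minimal imputation sketch: stack each series' Page matrix side by side, zero-fill-and-rescale the missing entries, and take a truncated SVD. The function name, the non-overlapping-window construction, and the default rank are illustrative assumptions, not the paper's exact variant.

```python
import numpy as np

def mssa_impute(series, rank=2, L=None):
    """Impute NaNs in a multivariate time series (shape (N, T)) via a
    low-rank approximation of the stacked Page matrix (mSSA-style sketch)."""
    X = np.asarray(series, dtype=float)
    N, T = X.shape
    L = L or max(2, int(np.sqrt(T)))     # window length
    K = T // L                           # non-overlapping windows per series
    # Stacked Page matrix: each column is a length-L chunk of some series
    page = np.hstack([X[i, :L * K].reshape(K, L).T for i in range(N)])
    obs = ~np.isnan(page)
    p = obs.mean()                       # observed fraction
    Y = np.where(obs, page, 0.0) / p     # zero-fill and rescale
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    est = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # rank-r truncation
    out = X.copy()                       # unstack back into (N, T)
    for i in range(N):
        out[i, :L * K] = est[:, i * K:(i + 1) * K].T.ravel()
    return out
```

The fill-and-rescale step is the classical hard-singular-value-thresholding estimator; a harmonic series has an (approximately) rank-2 Page matrix, so rank=2 suffices to recover it.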
no code implementations • 30 Apr 2020 • Anish Agarwal, Abdullah Alomar, Arnab Sarker, Devavrat Shah, Dennis Shen, Cindy Yang
In essence, the method leverages information from interventions that have already been enacted across the world and fits it to a policy maker's setting of interest. For example, to estimate the effect of mobility-restricting interventions on the U.S., we use daily death data from countries that enforced severe mobility restrictions to create a "synthetic low-mobility U.S." and predict the counterfactual trajectory of the U.S. had it indeed applied a similar intervention.
no code implementations • 17 Mar 2019 • Anish Agarwal, Abdullah Alomar, Devavrat Shah
Computationally, tspDB is 59-62x faster in median ML model training time and 94-95x faster in prediction query latency compared to LSTM and DeepAR.