2 code implementations • 12 Mar 2024 • Abdul Fatir Ansari, Lorenzo Stella, Caner Turkmen, Xiyuan Zhang, Pedro Mercado, Huibin Shen, Oleksandr Shchur, Syama Sundar Rangapuram, Sebastian Pineda Arango, Shubham Kapoor, Jasper Zschiegner, Danielle C. Maddix, Michael W. Mahoney, Kari Torkkola, Andrew Gordon Wilson, Michael Bohlke-Schneider, Yuyang Wang
We introduce Chronos, a simple yet effective framework for pretrained probabilistic time series models.
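At its core, Chronos turns a real-valued series into a token sequence a language model can consume, by mean-scaling the values and quantizing them into a fixed vocabulary. A rough sketch of that scale-and-quantize step (the bin scheme, ranges, and function names here are illustrative, not the paper's exact choices):

```python
import numpy as np

def tokenize(series, n_bins=10, low=-3.0, high=3.0):
    # Mean-scale so magnitudes are comparable across datasets
    scale = np.mean(np.abs(series)) or 1.0
    scaled = series / scale
    # Uniform bin edges; each value maps to an integer token id
    edges = np.linspace(low, high, n_bins - 1)
    tokens = np.digitize(scaled, edges)
    return tokens, scale

def detokenize(tokens, scale, n_bins=10, low=-3.0, high=3.0):
    # Map each token back to its bin centre, then undo the scaling
    edges = np.linspace(low, high, n_bins - 1)
    centers = np.concatenate(([low], (edges[:-1] + edges[1:]) / 2, [high]))
    return centers[tokens] * scale

series = np.array([1.0, 2.0, 4.0, 2.0, 1.0])
tokens, scale = tokenize(series)
approx = detokenize(tokens, scale)
```

With more bins the round-trip error shrinks; the point is only that forecasting then reduces to next-token prediction over this vocabulary.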
1 code implementation • NeurIPS 2023 • Marcel Kollovieh, Abdul Fatir Ansari, Michael Bohlke-Schneider, Jasper Zschiegner, Hao Wang, Yuyang Wang
Prior works on time series diffusion models have primarily focused on developing conditional models tailored to specific forecasting or imputation tasks.
1 code implementation • 7 Mar 2023 • Alvin Heng, Abdul Fatir Ansari, Harold Soh
We present Flow-Guided Density Ratio Learning (FDRL), a simple and scalable approach to generative modeling that builds on the stale (time-independent) approximation of the gradient flow of entropy-regularized f-divergences introduced in DGflow.
1 code implementation • 26 Jan 2023 • Abdul Fatir Ansari, Alvin Heng, Andre Lim, Harold Soh
Learning accurate predictive models of real-world dynamic phenomena (e.g., climate, biological systems) remains a challenging task.
1 code implementation • NeurIPS 2021 • Abdul Fatir Ansari, Konstantinos Benidis, Richard Kurle, Ali Caner Turkmen, Harold Soh, Alexander J. Smola, Yuyang Wang, Tim Januschowski
We propose the Recurrent Explicit Duration Switching Dynamical System (RED-SDS), a flexible model that is capable of identifying both state- and time-dependent switching dynamics.
1 code implementation • ICLR 2021 • Abdul Fatir Ansari, Ming Liang Ang, Harold Soh
We introduce Discriminator Gradient Flow (DGflow), a new technique that improves generated samples via the gradient flow of entropy-regularized f-divergences between the real and the generated data distributions.
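The refinement step amounts to a discretized SDE: samples ascend a (frozen) estimate of the log density ratio between real and generated distributions, with a small diffusion term. A toy one-dimensional sketch, where the density ratio is known in closed form rather than estimated by a discriminator (all distributions and step sizes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: "generated" samples from q = N(0, 4), target p = N(3, 1).
# A trained discriminator would estimate log(p/q); here we use it exactly.
def grad_log_ratio(x):
    # d/dx [log p(x) - log q(x)] for N(3, 1) vs N(0, 4)
    return -(x - 3.0) + x / 4.0

x0 = rng.normal(0.0, 2.0, size=2000)  # samples from q
x = x0.copy()
eta, gamma, steps = 0.01, 0.01, 100
for _ in range(steps):
    noise = rng.normal(size=x.shape)
    # Euler-Maruyama step: gradient-flow drift plus small diffusion
    x = x + eta * grad_log_ratio(x) + np.sqrt(2 * gamma * eta) * noise
```

After refinement, the sample mean has moved toward the target's and the spread has contracted, without retraining the generator, which is the practical appeal of the method.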
Ranked #1 on Text Generation on the One Billion Word benchmark
1 code implementation • CVPR 2020 • Abdul Fatir Ansari, Jonathan Scarlett, Harold Soh
In this paper, we formulate the problem of learning an IGM as minimizing the expected distance between characteristic functions.
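The characteristic function of a distribution is φ(t) = E[exp(i⟨t, x⟩)], so a sample-based distance can compare empirical characteristic functions of real and generated data at randomly drawn frequencies t. A minimal sketch of such an estimator (the frequency distribution and names are illustrative, not the paper's exact formulation):

```python
import numpy as np

rng = np.random.default_rng(0)

def ecf(samples, ts):
    # Empirical characteristic function: mean of exp(i <t, x>) over samples
    # samples: (n, d), ts: (m, d) frequency vectors
    return np.exp(1j * ts @ samples.T).mean(axis=1)

def cf_distance(x, y, ts):
    # Squared modulus of the ECF difference, averaged over frequencies
    return float(np.mean(np.abs(ecf(x, ts) - ecf(y, ts)) ** 2))

d = 2
ts = rng.normal(size=(128, d))            # frequencies drawn from a Gaussian
a = rng.normal(0.0, 1.0, size=(5000, d))
b = rng.normal(0.0, 1.0, size=(5000, d))  # same distribution as a
c = rng.normal(3.0, 1.0, size=(5000, d))  # shifted distribution
```

Identically distributed samples give a distance near zero, while a distribution shift produces a clearly larger value, and the estimator is differentiable in the samples, so it can serve as a generator training loss.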
2 code implementations • 12 Sep 2018 • Abdul Fatir Ansari, Harold Soh
We address the problem of unsupervised disentanglement of latent representations learnt via deep generative models.