no code implementations • 5 Feb 2024 • Weihan Li, Chengrui Li, Yule Wang, Anqi Wu
Consequently, the model achieves an inference cost that is linear in the number of time points and provides an interpretable low-dimensional representation, revealing communication directions across brain regions and separating oscillatory communications into different frequency bands.
no code implementations • 2 Feb 2024 • Chengrui Li, Weihan Li, Yule Wang, Anqi Wu
For (1), we propose a new differentiable POGLM that enables the pathwise gradient estimator, which outperforms the score function gradient estimator used in existing works.
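As a generic illustration of the distinction drawn here (not the paper's POGLM), the two estimators can be compared on a toy problem: estimating the gradient of E_{z~N(mu, sigma^2)}[z^2] with respect to mu, whose true value is 2*mu. All variable names below are for this sketch only.

```python
import numpy as np

# Toy comparison of gradient estimators (illustrative sketch, not the
# paper's model): estimate d/dmu E_{z~N(mu, sigma^2)}[z^2] = 2*mu.
rng = np.random.default_rng(0)
mu, sigma, n = 1.5, 1.0, 200_000
eps = rng.standard_normal(n)
z = mu + sigma * eps  # reparameterized samples

# Score-function (REINFORCE) estimator: f(z) * d/dmu log q(z; mu)
score_grads = z**2 * (z - mu) / sigma**2

# Pathwise (reparameterization) estimator: d/dmu f(mu + sigma*eps)
pathwise_grads = 2.0 * (mu + sigma * eps)

print(score_grads.mean(), pathwise_grads.mean())  # both near 2*mu = 3.0
print(score_grads.var(), pathwise_grads.var())    # pathwise variance is much lower
```

Both estimators are unbiased, but the pathwise estimator typically has far lower variance, which is why it is preferred whenever the model admits a differentiable reparameterization.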
no code implementations • 4 Nov 2023 • Chengrui Li, Yule Wang, Weihan Li, Anqi Wu
Maximizing the log-likelihood is a crucial aspect of learning latent variable models, and variational inference (VI) stands as the commonly adopted method.
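For context, VI maximizes the standard evidence lower bound (ELBO) on the log-likelihood; in the usual notation (with latent variable $z$, model $p_\theta$, and variational posterior $q_\phi$):

```latex
\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\!\big[\log p_\theta(x, z) - \log q_\phi(z \mid x)\big] \;=\; \mathcal{L}(\theta, \phi),
```

where the gap is $\mathrm{KL}\big(q_\phi(z \mid x)\,\|\,p_\theta(z \mid x)\big)$, so the bound is tight exactly when the variational posterior matches the true posterior.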
no code implementations • 23 Oct 2023 • Chengrui Li, Soon Ho Kim, Chris Rodgers, Hannah Choi, Anqi Wu
We introduce both a Gaussian prior and a one-hot prior over the GLM in each state.
1 code implementation • 9 Jun 2023 • Yule Wang, Zijing Wu, Chengrui Li, Anqi Wu
Specifically, the latent dynamics structures of the source domain are first extracted by a diffusion model.
1 code implementation • 11 Nov 2022 • Chengrui Li, Anqi Wu
To deal with very noisy data with weak correlations, we propose two solutions, blockwise and geodesic, that make use of locally correlated data points and yield more accurate and numerically more stable latent estimates.