no code implementations • 18 Oct 2023 • Yanming Kang, Giang Tran, Hans De Sterck
The overall complexity of Fast Multipole Attention is $\mathcal{O}(n)$ or $\mathcal{O}(n \log n)$, depending on whether or not the queries are down-sampled.
no code implementations • 6 Oct 2023 • Esha Saha, Giang Tran
Diffusion probabilistic models have been successfully used to generate data from noise.
1 code implementation • 11 Nov 2022 • Esha Saha, Lam Si Tung Ho, Giang Tran
The most popular tools for modelling and predicting infectious disease epidemics are compartmental models.
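Compartmental models partition a population into disease states and track flows between them. As background, a minimal sketch of the classic SIR model with forward-Euler integration (the parameter values `beta` and `gamma` are hypothetical, and this is generic textbook SIR, not the paper's method):

```python
# Minimal SIR compartmental model (illustrative sketch, not the paper's method).
# beta: transmission rate, gamma: recovery rate -- hypothetical values.

def simulate_sir(s0, i0, r0, beta=0.3, gamma=0.1, dt=0.1, steps=1000):
    """Integrate the SIR ODEs with forward Euler; S, I, R are population fractions."""
    s, i, r = s0, i0, r0
    history = [(s, i, r)]
    for _ in range(steps):
        ds = -beta * s * i          # susceptibles infected
        di = beta * s * i - gamma * i  # infections minus recoveries
        dr = gamma * i              # recoveries
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
        history.append((s, i, r))
    return history

traj = simulate_sir(0.99, 0.01, 0.0)
```

The three rates sum to zero, so total population is conserved at every step; with these parameters the basic reproduction number is beta/gamma = 3, so an outbreak occurs and the recovered fraction grows.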
1 code implementation • 12 Apr 2022 • Nicholas Richardson, Hayden Schaeffer, Giang Tran
Signal decomposition and multiscale signal analysis provide many useful tools for time-frequency analysis.
1 code implementation • 6 Feb 2022 • Esha Saha, Hayden Schaeffer, Giang Tran
We prove that the HARFE method is guaranteed to converge with a given error bound depending on the noise and the parameters of the sparse ridge regression model.
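The underlying idea of sparse ridge regression over random features can be illustrated with a simplified sketch: fit ridge regression on random Fourier features, hard-threshold to the largest coefficients, and refit on that support. All names, the feature bandwidth, and the thresholding scheme below are illustrative assumptions, not the HARFE algorithm as published:

```python
import numpy as np

# Simplified sketch of sparse random-feature ridge regression: ridge-fit on
# random Fourier features, keep the s largest coefficients (hard threshold),
# and refit on the selected support. Illustration only, not HARFE itself.

rng = np.random.default_rng(0)

def sparse_rf_ridge(x, y, n_features=200, s=20, lam=1e-3):
    d = x.shape[1]
    omega = 3.0 * rng.standard_normal((d, n_features))  # random frequencies (bandwidth 3, assumed)
    b = rng.uniform(0.0, 2.0 * np.pi, n_features)       # random phases
    phi = np.cos(x @ omega + b)                          # random feature matrix
    # ridge regression: solve (Phi^T Phi + lam I) c = Phi^T y
    c = np.linalg.solve(phi.T @ phi + lam * np.eye(n_features), phi.T @ y)
    support = np.argsort(np.abs(c))[-s:]                 # hard threshold: keep s largest
    phi_s = phi[:, support]
    c_s = np.linalg.solve(phi_s.T @ phi_s + lam * np.eye(s), phi_s.T @ y)
    def predict(x_new):
        return np.cos(x_new @ omega + b)[:, support] @ c_s
    return predict

# usage on a toy 1-D target
x = rng.uniform(-1.0, 1.0, (200, 1))
y = np.sin(3.0 * x[:, 0])
model = sparse_rf_ridge(x, y)
err = np.mean((model(x) - y) ** 2)
```

Keeping only `s` of the `n_features` random features is what makes the expansion sparse; the refit on the selected support removes the shrinkage bias that the initial ridge solve introduces.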
no code implementations • 24 Aug 2021 • Lam Si Tung Ho, Nicholas Richardson, Giang Tran
In this paper, we propose an adaptive group Lasso deep neural network for high-dimensional function approximation where input data are generated from a dynamical system and the target function depends on a few active variables or a few linear combinations of variables.
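The group Lasso mechanism for selecting active input variables can be sketched as follows: group the first-layer weights by input coordinate, penalize the l2 norm of each group, and apply block soft-thresholding so that groups for inactive variables are driven exactly to zero. The weight shapes and the proximal-step framing below are illustrative, not the paper's exact training procedure:

```python
import numpy as np

# Sketch of group Lasso for input-variable selection in a first layer:
# one group per input coordinate (a row of W), penalized by its l2 norm.
# The adaptive variant reweights each group, e.g. by an initial estimate.
# Shapes and names are illustrative, not taken from the paper's code.

def group_lasso_penalty(W, weights=None):
    """Sum of (optionally reweighted) l2 norms of the rows of W."""
    norms = np.linalg.norm(W, axis=1)
    if weights is None:
        weights = np.ones_like(norms)
    return float(np.sum(weights * norms))

def prox_group_lasso(W, tau):
    """Block soft-thresholding: shrink each row's norm by tau, zeroing small groups."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return W * scale

W = np.array([[0.05, -0.02],   # weak group (inactive variable) -> zeroed
              [1.00,  2.00]])  # strong group (active variable) -> kept, shrunk
W_new = prox_group_lasso(W, tau=0.1)
```

Because an entire row is zeroed at once, the corresponding input variable is dropped from the network, which is exactly the variable-selection behavior the group penalty is designed to produce.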
2 code implementations • 4 Mar 2021 • Abolfazl Hashemi, Hayden Schaeffer, Robert Shi, Ufuk Topcu, Giang Tran, Rachel Ward
In particular, we provide generalization bounds for functions in a certain class (that is dense in a reproducing kernel Hilbert space) depending on the number of samples and the distribution of features.
no code implementations • 25 Nov 2018 • Lam Si Tung Ho, Hayden Schaeffer, Giang Tran, Rachel Ward
In this work, we study the problem of learning nonlinear functions from corrupted and dependent data.