Search Results for author: Giang Tran

Found 9 papers, 4 papers with code

Fast Multipole Attention: A Divide-and-Conquer Attention Mechanism for Long Sequences

no code implementations · 18 Oct 2023 · Yanming Kang, Giang Tran, Hans De Sterck

The overall complexity of Fast Multipole Attention is $\mathcal{O}(n)$ or $\mathcal{O}(n \log n)$, depending on whether or not the queries are down-sampled; a toy sketch of the divide-and-conquer idea follows this entry.

Language Modelling
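Below is a minimal NumPy sketch of that down-sampling idea, reduced to two levels: each block of queries attends to its own keys and values at full resolution and to the rest of the sequence through block-averaged summaries. The block size, the mean-pooling coarsener, and the single coarse level are illustrative assumptions, not the paper's full multilevel scheme.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def two_level_attention(Q, K, V, block=4):
    """Queries see nearby keys exactly and far keys via block means."""
    n, d = Q.shape
    nb = n // block
    # Coarse (down-sampled) keys/values: one averaged token per block.
    Kc = K.reshape(nb, block, d).mean(axis=1)
    Vc = V.reshape(nb, block, d).mean(axis=1)
    out = np.empty_like(V)
    for b in range(nb):
        lo, hi = b * block, (b + 1) * block
        far = [j for j in range(nb) if j != b]
        # Fine resolution inside the block, coarse summaries elsewhere.
        K_mix = np.vstack([K[lo:hi], Kc[far]])
        V_mix = np.vstack([V[lo:hi], Vc[far]])
        scores = Q[lo:hi] @ K_mix.T / np.sqrt(d)
        out[lo:hi] = softmax(scores) @ V_mix
    return out

rng = np.random.default_rng(0)
Q, K, V = rng.standard_normal((3, 16, 8))
print(two_level_attention(Q, K, V).shape)  # (16, 8)
```

Stacking further coarse levels, so that each query interacts with only $\mathcal{O}(\log n)$ groups of keys, is what yields the near-linear complexity quoted above.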

Diffusion Random Feature Model

no code implementations · 6 Oct 2023 · Esha Saha, Giang Tran

Diffusion probabilistic models have been successfully used to generate data from noise.

Generalization Bounds

SPADE4: Sparsity and Delay Embedding based Forecasting of Epidemics

1 code implementation · 11 Nov 2022 · Esha Saha, Lam Si Tung Ho, Giang Tran

The most popular tools for modelling and predicting infectious disease epidemics are compartmental models.

SRMD: Sparse Random Mode Decomposition

1 code implementation · 12 Apr 2022 · Nicholas Richardson, Hayden Schaeffer, Giang Tran

Signal decomposition and multiscale signal analysis provide many useful tools for time-frequency analysis.

Time Series · Time Series Analysis

HARFE: Hard-Ridge Random Feature Expansion

1 code implementation · 6 Feb 2022 · Esha Saha, Hayden Schaeffer, Giang Tran

We prove that the HARFE method is guaranteed to converge, with an error bound that depends on the noise level and on the parameters of the sparse ridge regression model; a rough sketch of such a fit follows this entry.

Regression
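For a sense of what a hard-ridge random feature fit looks like, here is a hypothetical sketch (not the released HARFE code): alternate a hard-thresholding step that keeps the $s$ largest coefficients with a ridge solve on the selected support. The cosine feature map, sparsity level, step size, and ridge weight are all assumptions.

```python
import numpy as np

def hard_ridge_rf(X, y, N=300, s=20, lam=1e-3, step=0.1, iters=25, seed=0):
    """Sparse ridge regression over random cosine features via hard thresholding."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], N))
    b = rng.uniform(0, 2 * np.pi, N)
    A = np.cos(X @ W + b)                        # random feature matrix
    c = np.zeros(N)
    for _ in range(iters):
        # Gradient step on the ridge loss, then keep the s largest entries.
        g = A.T @ (y - A @ c) / len(y) - lam * c
        support = np.argsort(np.abs(c + step * g))[-s:]
        # Ridge regression restricted to the selected support.
        As = A[:, support]
        cs = np.linalg.solve(As.T @ As + lam * np.eye(s), As.T @ y)
        c = np.zeros(N)
        c[support] = cs
    return W, b, c

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (400, 10))
y = np.sin(2 * X[:, 0]) + 0.5 * X[:, 3]          # target uses 2 of 10 variables
W, b, c = hard_ridge_rf(X, y)
pred = np.cos(X @ W + b) @ c
print("nonzeros:", np.count_nonzero(c),
      "rel. err:", np.linalg.norm(pred - y) / np.linalg.norm(y))
```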

Adaptive Group Lasso Neural Network Models for Functions of Few Variables and Time-Dependent Data

no code implementations · 24 Aug 2021 · Lam Si Tung Ho, Nicholas Richardson, Giang Tran

In this paper, we propose an adaptive group Lasso deep neural network for high-dimensional function approximation in which the input data are generated by a dynamical system and the target function depends on only a few active variables or a few linear combinations of variables.
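A hypothetical PyTorch sketch of the selection mechanism follows; the network width, penalty weight, and the particular adaptive reweighting are illustrative choices rather than the paper's training procedure. The outgoing first-layer weights of each input coordinate form one group, so the group penalty can zero out entire inputs.

```python
import torch

torch.manual_seed(0)
d, hidden, n = 10, 32, 400
X = torch.rand(n, d) * 2 - 1
y = torch.sin(2 * X[:, 0]) + 0.5 * X[:, 3]        # only x0 and x3 are active

net = torch.nn.Sequential(
    torch.nn.Linear(d, hidden), torch.nn.Tanh(), torch.nn.Linear(hidden, 1)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
W1 = net[0].weight                                # (hidden, d): column j <-> input j

for _ in range(2000):
    opt.zero_grad()
    mse = torch.mean((net(X).squeeze(-1) - y) ** 2)
    group_norms = W1.norm(dim=0)                  # one group per input variable
    # Adaptive reweighting: groups that look active are penalized less.
    w = 1.0 / (group_norms.detach() + 1e-2)
    (mse + 1e-3 * (w * group_norms).sum()).backward()
    opt.step()

print(W1.norm(dim=0).detach().numpy().round(2))   # near zero except inputs 0 and 3
```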

Generalization Bounds for Sparse Random Feature Expansions

2 code implementations · 4 Mar 2021 · Abolfazl Hashemi, Hayden Schaeffer, Robert Shi, Ufuk Topcu, Giang Tran, Rachel Ward

In particular, we provide generalization bounds for functions in a certain class (one that is dense in a reproducing kernel Hilbert space), depending on the number of samples and the distribution of features; a toy example of such an expansion follows this entry.

BIG-bench Machine Learning · Compressive Sensing · +1
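The function class here consists of sparse expansions in random features, $f(x) \approx \sum_j c_j \cos(\langle \omega_j, x \rangle + b_j)$ with few nonzero coefficients. A toy fit of that form is sketched below; the feature count, penalty, and the scikit-learn Lasso solver are assumptions, not the paper's procedure.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, d, N = 300, 5, 500
X = rng.uniform(-1, 1, (n, d))
y = np.sin(np.pi * X[:, 0]) * X[:, 1]            # low-complexity target

W = rng.standard_normal((d, N))                   # random weights
b = rng.uniform(0, 2 * np.pi, N)                  # random phases
A = np.cos(X @ W + b)                             # random feature matrix

# l1 penalty keeps only a few features active in the expansion.
fit = Lasso(alpha=1e-3, max_iter=100_000).fit(A, y)
resid = A @ fit.coef_ + fit.intercept_ - y
print("nonzero coefficients:", np.count_nonzero(fit.coef_))
print("relative train error:", np.linalg.norm(resid) / np.linalg.norm(y))
```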

Recovery guarantees for polynomial approximation from dependent data with outliers

no code implementations · 25 Nov 2018 · Lam Si Tung Ho, Hayden Schaeffer, Giang Tran, Rachel Ward

In this work, we study the problem of learning nonlinear functions from corrupted and dependent data.
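As a toy version of this setting, the sketch below fits a polynomial by minimizing the $\ell_1$ residual (via iteratively reweighted least squares) so that a small number of outliers does not bias the coefficients. The degree, outlier model, and solver are illustrative assumptions; the paper's contribution is the recovery guarantee, not a particular solver.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 1 - 2 * x + 0.5 * x**3                        # true cubic
y[rng.choice(200, 15, replace=False)] += rng.normal(0, 5, 15)  # sparse outliers

A = np.vander(x, 5, increasing=True)              # basis 1, x, ..., x^4
c = np.linalg.lstsq(A, y, rcond=None)[0]          # least-squares warm start
for _ in range(50):
    # IRLS for the l1 loss: reweight by inverse residual magnitude.
    w = 1.0 / np.maximum(np.abs(y - A @ c), 1e-6)
    c = np.linalg.solve(A.T @ (A * w[:, None]), A.T @ (w * y))

print(c.round(3))   # close to [1, -2, 0, 0.5, 0] despite the outliers
```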
